Posted to builds@beam.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2021/09/18 00:50:03 UTC

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2438

See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2438/display/redirect?page=changes>

Changes:

[noreply] Minor: Prune docker volumes in Inventory job (#15532)


------------------------------------------
[...truncated 340.14 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 12:45:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
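
    The root cause named above is that the Row output of ParDo(RowMonitor) carries no schema, so no coder can be inferred for it. As the error text itself suggests, attaching the schema with PCollection.setRowSchema after the ParDo resolves this. A minimal sketch of the pattern, with an illustrative pass-through DoFn standing in for the test's RowMonitor (field names are taken from the query above, not from the test code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      // Illustrative stand-in for RowMonitor: passes each Row through.
      static class PassThrough extends DoFn<Row, Row> {
        @ProcessElement
        public void process(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }

      public static void main(String[] args) {
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows = p
            .apply(Create.of(
                Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build())
                .withRowSchema(schema))
            .apply("ParDo(RowMonitor)", ParDo.of(new PassThrough()))
            // A plain ParDo does not propagate the Row schema, so its output
            // has no inferable coder; re-attach the schema explicitly.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }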

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 12:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5940345697793417465.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-T8rrH2TELNbuh4XgaDr5Y23V20HI5R9g9FbPaOVIv30.jar
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 12:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 96581bf42c7128cd2a50f61fe3911b17cca1e72d224acb344417760ce6e5ebb6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-llgb9CxxKM0qUPYf45EbF8yh5y0iSss0RBd2DObl67Y.pb
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_17_45_35-301988535343975277?project=apache-beam-testing
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_17_45_35-301988535343975277
    Sep 18, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_17_45_35-301988535343975277
    Sep 18, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T00:45:38.481Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:48.886Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.838Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.912Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:49.945Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.014Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.038Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.066Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.438Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:45:50.832Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:46:12.224Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:46:35.610Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:01.755Z: Workers have started successfully.
    Sep 18, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:01.792Z: Workers have started successfully.
    Sep 18, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.372Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.504Z: Cleaning up.
    Sep 18, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:47:31.614Z: Stopping worker pool...
    Sep 18, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:49:53.807Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T00:49:53.850Z: Worker pool stopped.
    Sep 18, 2021 12:50:00 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_17_45_35-301988535343975277 finished with status DONE.
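
    The BeamPushDownIOSourceRel in this run shows that both the column projection (usedFields) and the filter were pushed into the BigQuery Storage Read API, which is why only four of the table's fields were read server-side. At the BigQueryIO level the pushed-down read roughly corresponds to the sketch below; the table reference is illustrative, and DIRECT_READ is required for withSelectedFields/withRowRestriction:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative reference
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only these columns are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: evaluated server-side by the Storage API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }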

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5054a4b3-af77-46fa-95ea-0d11e7efa0d0 and timestamp: 2021-09-18T00:50:00.579000000Z:
                     Metric:                    Value:
                   read_time                     8.551
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:50:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
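
    This warning only affects metrics publishing; the pipeline run itself finished. Assuming the standard Beam testutils InfluxDB options (the option names and values below are an assumption based on Beam's IO testing documentation, not taken from this job's configuration), publishing could be enabled by adding them to beamTestPipelineOptions alongside the existing options:

    -DbeamTestPipelineOptions=["--influxMeasurement=sql_bqio_read_java_batch",
        "--influxDatabase=beam_test_metrics",
        "--influxHost=http://localhost:8086", ...]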

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 45.26 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/7ronkkvanjuec

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2688

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2688/display/redirect>

Changes:


------------------------------------------
[...truncated 341.22 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 34b8fd0a3be99d47664fff26609317a1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 19, 2021 12:45:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 19, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 19, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1535841857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2311926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 19, 2021 12:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 19, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8852894522611761953.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-P3K73HN3CQX9zz5Z9-3Ig3gsKzfJ2DOy_g3i-LDbxPk.jar
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 30eb5a73ae62326fa74a3927bb19af3bd75b6306dc874cef6b51f43e43541868> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MOtac65iMm-nSjknuxmvO9dbYwbch0zva1H0PkNUGGg.pb
    Nov 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.36.0-SNAPSHOT
    Nov 19, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-19_04_45_21-1383322531203372873?project=apache-beam-testing
    Nov 19, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-19_04_45_21-1383322531203372873
    Nov 19, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-19_04_45_21-1383322531203372873
    Nov 19, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-19T12:45:24.883Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:32.605Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.297Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.344Z: Expanding GroupByKey operations into optimizable parts.
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.379Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.469Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.497Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 19, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:33.529Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 19, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:34.568Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:45:34.701Z: Starting 5 workers in us-central1-c...
    Nov 19, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:46:03.849Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 19, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:46:24.549Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 19, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:46:50.583Z: Workers have started successfully.
    Nov 19, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:46:50.610Z: Workers have started successfully.
    Nov 19, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:47:22.107Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:47:22.480Z: Cleaning up.
    Nov 19, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:47:22.580Z: Stopping worker pool...
    Nov 19, 2021 12:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:49:53.446Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 19, 2021 12:49:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T12:49:53.512Z: Worker pool stopped.
    Nov 19, 2021 12:49:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-19_04_45_21-1383322531203372873 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ed735472-d18f-456c-bf6c-a61b4084346a and timestamp: 2021-11-19T12:49:59.770000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.545

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 12:49:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 55.002 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 42s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/6mbyqyhiksuxm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2687

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2687/display/redirect>

Changes:


------------------------------------------
[...truncated 348.85 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 34b8fd0a3be99d47664fff26609317a1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 19, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 19, 2021 6:45:16 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 19, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 19, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1054139362]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1291767819]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 19, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
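
Here, unlike the two failing tests, the planner produced a BeamPushDownIOSourceRel: the projection (usedFields) and the filter are handed to the BigQuery storage read instead of being evaluated in a BeamCalcRel. A minimal sketch of the roughly equivalent hand-written read, with an illustrative table name (the public Hacker News dataset):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> rows = pipeline.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // illustrative
            .withMethod(Method.DIRECT_READ)                 // storage API read
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));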
    Nov 19, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2855448962100750781.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-epd2hK-tHCfMJ3m7gX3WLGH7jWd_z6-kRzVODQ3ZCew.jar
    Nov 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Nov 19, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 0 seconds
    Nov 19, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 19, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 7bdf89b84d9963b60a4a8a8c75b4498172620ba9234af9f3398d2a4fcd488b58> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-e9-JuE2ZY7YKSoqMdbRJgXJiC6kjSvnzOY0qT81Ii1g.pb
    Nov 19, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 19, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 19, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 19, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.36.0-SNAPSHOT
    Nov 19, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-18_22_45_28-15970868399584132364?project=apache-beam-testing
    Nov 19, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-18_22_45_28-15970868399584132364
    Nov 19, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-18_22_45_28-15970868399584132364
    Nov 19, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-19T06:45:32.041Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.079Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.714Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.757Z: Expanding GroupByKey operations into optimizable parts.
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.791Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.871Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.905Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:40.937Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:41.570Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:41.638Z: Starting 5 workers in us-central1-c...
    Nov 19, 2021 6:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:45:45.341Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 19, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:46:26.541Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 19, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:46:53.665Z: Workers have started successfully.
    Nov 19, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:46:53.694Z: Workers have started successfully.
    Nov 19, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:47:22.701Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:47:22.871Z: Cleaning up.
    Nov 19, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:47:22.954Z: Stopping worker pool...
    Nov 19, 2021 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:49:41.530Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 19, 2021 6:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T06:49:41.635Z: Worker pool stopped.
    Nov 19, 2021 6:49:47 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-18_22_45_28-15970868399584132364 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a9553ef2-6a4f-415d-bcf2-0a3c97d99ad4 and timestamp: 2021-11-19T06:49:47.669000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.082

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 6:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
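
The run itself finished DONE; the publisher only skips exporting because no InfluxDB measurement/database was supplied. A hedged sketch of supplying them, assuming the testutils settings builder keeps this shape (the method names are an assumption, not verified against this branch; the values reuse the BigQuery dataset/table names from the test options as placeholders):

    // Assumption: org.apache.beam.sdk.testutils.publishing.InfluxDBSettings
    // exposes this builder; host/database/measurement values are illustrative.
    InfluxDBSettings settings = InfluxDBSettings.builder()
        .withHost("http://localhost:8086")
        .withDatabase("beam_performance")
        .withMeasurement("sql_bqio_read_java_batch")
        .get();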

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 35.92 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/ffzloscqy2mgk

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2686

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2686/display/redirect?page=changes>

Changes:

[avilovpavel6] Support GO sdk

[dpcollins] Fix race condition in Pub/Sub Lite SDF that causes it to error out when

[dpcollins] edit

[dpcollins] Restructure processor to access beam types from downcall thread

[dpcollins] Restructure processor to access beam types from downcall thread

[dpcollins] Restructure processor to access beam types from downcall thread

[noreply] Merge pull request #15956 from [BEAM-13210] [Playground]: support

[noreply] created quickstart guide for multi-language pipelines (Python) (#16001)

[dpcollins] use interface in logger to work around likely SLF4J bug

[dpcollins] revert logger class

[noreply] Revert "Allow wildcards for java class lookup transform providers."

[Valentyn Tymofieiev] Moving to 2.36.0-SNAPSHOT on master branch.


------------------------------------------
[...truncated 352.13 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 19, 2021 12:51:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 19, 2021 12:51:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 19, 2021 12:51:12 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 19, 2021 12:51:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:51:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:51:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:51:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1054139362]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1291767819]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 19, 2021 12:51:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 19, 2021 12:51:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.36.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.36.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.36.0-SNAPSHOT-tests-wq460SWma404IxeH4GpbhmTrsPsdyGkKfzx-yBtHWMs.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.36.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.36.0-SNAPSHOT-tests-gN4bkJvUYYy5RCwXc4wAbV3nb5fkoyJUU3rNj5CoXCk.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.36.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.36.0-SNAPSHOT-Im_LlokEkGcOJP4pI4OGjbpuZu662IAWuMfGbTGYeas.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-3NmISVC1NgnyaVJgTCwKmBO_AxY6FGLZgDRZDK08vLc.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7852435231379495403.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RNl0ZX_e_nU6iuOCLR7u_0dLx-Cv0fCnrlKH9sX5VS8.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.36.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.36.0-SNAPSHOT-nJooQBOmf91zLQQ9Mr-Fe3ZJqjZmeXLkGPCYKyE5Yao.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.36.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.36.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.36.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.36.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.36.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.36.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.36.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.36.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Nov 19, 2021 12:51:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Nov 19, 2021 12:51:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 241 files cached, 11 files newly uploaded in 2 seconds
    Nov 19, 2021 12:51:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 19, 2021 12:51:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106194 bytes, hash b96a0b5c694758f7cb48f58e530fa9fe8d2dad5017e6703710ce32aa7dc7d9a3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uWoLXGlHWPfLSPWOUw-p_o0trVAX5nA3EM4yqn3H2aM.pb
    Nov 19, 2021 12:51:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 19, 2021 12:51:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 19, 2021 12:51:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 19, 2021 12:51:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.36.0-SNAPSHOT
    Nov 19, 2021 12:51:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-18_16_51_26-16279870205932735328?project=apache-beam-testing
    Nov 19, 2021 12:51:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-18_16_51_26-16279870205932735328
    Nov 19, 2021 12:51:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-18_16_51_26-16279870205932735328
    Nov 19, 2021 12:51:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-19T00:51:29.532Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 19, 2021 12:51:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:39.013Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:39.759Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:39.845Z: Expanding GroupByKey operations into optimizable parts.
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:39.887Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:39.986Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:40.010Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:40.045Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:40.404Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 12:51:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:40.487Z: Starting 5 workers in us-central1-c...
    Nov 19, 2021 12:51:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:51:57.560Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 19, 2021 12:52:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:52:21.170Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 19, 2021 12:52:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:52:49.666Z: Workers have started successfully.
    Nov 19, 2021 12:52:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:52:49.697Z: Workers have started successfully.
    Nov 19, 2021 12:53:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:53:19.513Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 19, 2021 12:53:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:53:19.845Z: Cleaning up.
    Nov 19, 2021 12:53:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:53:19.990Z: Stopping worker pool...
    Nov 19, 2021 12:55:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:55:46.367Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 19, 2021 12:55:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-19T00:55:46.434Z: Worker pool stopped.
    Nov 19, 2021 12:55:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-18_16_51_26-16279870205932735328 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 00507297-e8d4-41ce-a4f9-d14564157234 and timestamp: 2021-11-19T00:55:51.976000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.296

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 19, 2021 12:55:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 44.64 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 28s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/d7vblhqxw44xm

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2685

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2685/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Allow wildcards for java class lookup transform providers.

[Robert Bradshaw] Disallow getClass().forname().

[Robert Bradshaw] Docs, globbing, and a test.

[Robert Bradshaw] Defer expansion service creation for easier testing.

[Robert Bradshaw] Allow java jar service with classpath.

[noreply] Merge pull request #15785 from Support for passing priority parameter in

[noreply] [BEAM-13283] Skip apache beam dependency installation from


------------------------------------------
[...truncated 350.53 KB...]
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/expansion-service/build/libs/beam-sdks-java-expansion-service-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 18, 2021 6:46:23 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 18, 2021 6:46:24 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 18, 2021 6:46:25 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 252 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 18, 2021 6:46:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@876275357]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@273472569]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:30 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:31 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 18, 2021 6:46:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 18, 2021 6:46:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 18, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 253 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 18, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 18, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test151196876845083902.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nQJCKS0P9r46lJJShzT02NAbPBEDf0XwZg2VqZSEgzg.jar
    Nov 18, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Nov 18, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Nov 18, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Nov 18, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 4 files newly uploaded in 0 seconds
    Nov 18, 2021 6:46:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 18, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106593 bytes, hash 3de55bbd1101a565e1bf436ba4a83a721cb9c8a109c8e6406ff64d576b1658df> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PeVbvREBpWXhv0NrpKg6chy5yKEJyOZAb_ZNV2sWWN8.pb
    Nov 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 18, 2021 6:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-18_10_46_39-15225712398198291788?project=apache-beam-testing
    Nov 18, 2021 6:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-18_10_46_39-15225712398198291788
    Nov 18, 2021 6:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-18_10_46_39-15225712398198291788
    Nov 18, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-18T18:46:42.709Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:48.363Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.126Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.164Z: Expanding GroupByKey operations into optimizable parts.
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.180Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.244Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.272Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.301Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.626Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:46:49.698Z: Starting 5 workers in us-central1-a...
    Nov 18, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:47:10.458Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
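
The warning above is not what fails the build, but it explains why new custom.googleapis.com/* metrics silently stop appearing: the project has hit the 100-descriptor limit for Dataflow-created metric descriptors. One way to clear stale descriptors is the Cloud Monitoring v3 API the warning links to. A sketch using the google-cloud-monitoring Java client follows; it assumes that library is on the classpath, and the filter and hard-coded project id are illustrative, not taken from this job.

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.ListMetricDescriptorsRequest;
    import com.google.monitoring.v3.ProjectName;

    public class CleanupMetricDescriptors {
      public static void main(String[] args) throws Exception {
        // Deletion is irreversible; only descriptors matching the filter are touched.
        try (MetricServiceClient client = MetricServiceClient.create()) {
          ListMetricDescriptorsRequest request =
              ListMetricDescriptorsRequest.newBuilder()
                  .setName(ProjectName.of("apache-beam-testing").toString())
                  // Only user-defined descriptors, the custom.googleapis.com/* form named in the warning.
                  .setFilter("metric.type = starts_with(\"custom.googleapis.com/\")")
                  .build();
          client.listMetricDescriptors(request).iterateAll()
              .forEach(d -> client.deleteMetricDescriptor(d.getName()));
        }
      }
    }
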
    Nov 18, 2021 6:47:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:47:35.312Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 18, 2021 6:48:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:48:00.110Z: Workers have started successfully.
    Nov 18, 2021 6:48:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:48:00.147Z: Workers have started successfully.
    Nov 18, 2021 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:48:26.897Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:48:27.043Z: Cleaning up.
    Nov 18, 2021 6:48:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:48:27.123Z: Stopping worker pool...
    Nov 18, 2021 6:50:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:50:54.176Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 18, 2021 6:50:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T18:50:54.220Z: Worker pool stopped.
    Nov 18, 2021 6:51:00 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-18_10_46_39-15225712398198291788 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c8dad082-d7b3-4d3c-8f8d-e4c502ca9790 and timestamp: 2021-11-18T18:51:01.050000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.718

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 6:51:01 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
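
This warning is why the fields_read/read_time numbers above never reach InfluxDB: the publisher skips publication unless a database and a measurement are configured. In Beam's performance-test jobs these are normally passed as extra entries in the -DbeamTestPipelineOptions list shown at the top of this task. The option names below follow that convention and the values are placeholders, so treat this as a sketch of the missing configuration rather than the job's actual settings:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"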

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 42.469 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 41s
153 actionable tasks: 102 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/6bhohrw5ngdhc

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2684

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2684/display/redirect>

Changes:


------------------------------------------
[...truncated 346.71 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 105 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d0d57b690416c2db9417137d15f7182c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 105'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 105'
Successfully started process 'Gradle Test Executor 105'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 18, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 18, 2021 12:45:07 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 18, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 18, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1054139362]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
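
This stack trace (repeated for readUsingDefaultMethod below) is the actual test failure: the RowMonitor ParDo outputs Beam Row values, and a PCollection of Rows cannot have a coder inferred unless a schema is attached. The error text already names the fix. The sketch below applies it; the input PCollection, the RowMonitorFn class, and the field types are assumed for illustration and are not taken from BigQueryIOPushDownIT.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema for the projected columns (author, type, title, score); types assumed.
    Schema schema = Schema.builder()
        .addNullableField("author", FieldType.STRING)
        .addNullableField("type", FieldType.STRING)
        .addNullableField("title", FieldType.STRING)
        .addNullableField("score", FieldType.INT64)
        .build();

    // "input" stands for the upstream PCollection<Row>; RowMonitorFn is a
    // placeholder DoFn<Row, Row>, not the IT's actual class.
    PCollection<Row> monitored =
        input.apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorFn()));
    // Attach the schema so a RowCoder can be inferred, as the error message suggests:
    monitored.setRowSchema(schema);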

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@970289771]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 18, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
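
The two INFO lines above show concretely what push-down buys: only the four used fields are requested, and the WHERE clause is evaluated inside the BigQuery Storage API rather than in a downstream BeamCalcRel. For comparison, a hand-written read with the same projection and filter looks like the sketch below; the table id and the surrounding pipeline are illustrative, since the log does not name the concrete table.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    Pipeline p = Pipeline.create();
    p.apply(
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS")  // placeholder table id
            .withMethod(TypedRead.Method.DIRECT_READ)
            // usedFields=[by, type, title, score] from the BEAMPlan above:
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // The pushed-down filter, verbatim from the log:
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
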
    Nov 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 18, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 18, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 18, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2176825310437579057.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5J_xfHXQMPxBKrHdmeFiX9XopUEZQEatQmafosxnM9I.jar
    Nov 18, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 18, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 18, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 42db5537f794aa21dfb0c02abfc735a0c7d41b65cc2de7efc93643e7e4e32223> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QttVN_eUqiHfsMAqv8c1oMfUG2XMLefvyTZD5-TjIiM.pb
    Nov 18, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 18, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 18, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 18, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-18_04_45_21-6764834930054716423?project=apache-beam-testing
    Nov 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-18_04_45_21-6764834930054716423
    Nov 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-18_04_45_21-6764834930054716423
    Nov 18, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-18T12:45:24.957Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:30.626Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.241Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.291Z: Expanding GroupByKey operations into optimizable parts.
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.332Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.414Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.459Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 18, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.500Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 18, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:31.906Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:32.000Z: Starting 5 workers in us-central1-a...
    Nov 18, 2021 12:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:45:47.281Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 18, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:46:22.875Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 18, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:46:49.799Z: Workers have started successfully.
    Nov 18, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:46:49.829Z: Workers have started successfully.
    Nov 18, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:47:18.705Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:47:18.876Z: Cleaning up.
    Nov 18, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:47:18.962Z: Stopping worker pool...
    Nov 18, 2021 12:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:49:53.622Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 18, 2021 12:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T12:49:53.690Z: Worker pool stopped.
    Nov 18, 2021 12:50:00 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-18_04_45_21-6764834930054716423 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ff171ef1-c8a5-44ff-9e2d-01ef9767db55 and timestamp: 2021-11-18T12:50:00.515000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.101

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 12:50:00 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 105 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 57.917 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/df6gapj2eaoug

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2683

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2683/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13183] [BEAM-8152] Use venv instead of virtualenv to create Python


------------------------------------------
[...truncated 355.64 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d0d57b690416c2db9417137d15f7182c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 18, 2021 6:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 18, 2021 6:46:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 18, 2021 6:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 18, 2021 6:46:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1535841857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1690708937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 6:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 6:46:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 18, 2021 6:46:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 18, 2021 6:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-wq460SWma404IxeH4GpbhmTrsPsdyGkKfzx-yBtHWMs.jar
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3250984090105644606.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UwDewJb6bf8LlpoU3pwv-B86dxUfMnS_8mWoSKqdK94.jar
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 0 seconds
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 18, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash cea32211711baddabe5dc3ac163b05e9561ed68983ce7d958cc5ab254eee264a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zqMiEXEbrdq-XcOsFjsF6VYe1omDzn2VjMWrJU7uJko.pb
    Nov 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 18, 2021 6:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-17_22_46_30-6305211948437166189?project=apache-beam-testing
    Nov 18, 2021 6:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-17_22_46_30-6305211948437166189
    Nov 18, 2021 6:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-17_22_46_30-6305211948437166189
    Nov 18, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-18T06:46:33.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 18, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:40.302Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:40.920Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:40.945Z: Expanding GroupByKey operations into optimizable parts.
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:40.976Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:41.049Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:41.077Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:41.112Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:41.421Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:46:41.497Z: Starting 5 workers in us-central1-a...
    Nov 18, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:47:06.654Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 18, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:47:25.381Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 18, 2021 6:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:47:50.882Z: Workers have started successfully.
    Nov 18, 2021 6:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:47:50.929Z: Workers have started successfully.
    Nov 18, 2021 6:48:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:48:15.668Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 6:48:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:48:15.815Z: Cleaning up.
    Nov 18, 2021 6:48:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:48:15.885Z: Stopping worker pool...
    Nov 18, 2021 6:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:50:37.691Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 18, 2021 6:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T06:50:37.725Z: Worker pool stopped.
    Nov 18, 2021 6:50:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-17_22_46_30-6305211948437166189 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 65b33954-540a-4f92-903f-156841c00bf4 and timestamp: 2021-11-18T06:50:44.883000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.085

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 6:50:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 31.381 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 25s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/jusvntbz3w5qq

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2682

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2682/display/redirect?page=changes>

Changes:

[Brian Hulette] remove pkg_resources in run_generate_requirements.sh

[Brian Hulette] Run generatePythonRequirementsAll for pyarrow 6

[noreply] Update dev container tags used by Dataflow runner with unreleased SDKs.

[noreply] [BEAM-13265] Add withDeterministicRecordIdFn which allows for


------------------------------------------
[...truncated 344.16 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b6d11026b760080b604f97dbe6424b91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 18, 2021 12:47:09 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 18, 2021 12:47:10 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 18, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 18, 2021 12:47:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:47:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:47:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1054139362]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1291767819]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 18, 2021 12:47:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
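
For contrast with the two failures above: here the planner produced a BeamPushDownIOSourceRel, meaning both the column projection (usedFields) and the WHERE clause were handed to the BigQuery Storage API read instead of being evaluated in a BeamCalcRel. The following is a self-contained sketch of the same query shape through SqlTransform, run against an in-memory PCOLLECTION with made-up rows; push-down itself only occurs with a pushdown-capable table provider such as the BigQuery DIRECT_READ table this test uses. The class name and data are illustrative, and the beam-sdks-java-extensions-sql dependency is assumed.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryShape {
      public static void main(String[] args) {
        // Columns mirroring the projected fields; `author` stands in for `by`.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> input = p.apply(Create.of(
                Row.withSchema(schema).addValues("a", "story", "kept", 5L).build(),
                Row.withSchema(schema).addValues("b", "comment", "dropped", 9L).build())
            .withRowSchema(schema));

        // Same projection and predicate the planner pushed down above,
        // expressed against the implicit PCOLLECTION table.
        PCollection<Row> filtered = input.apply(SqlTransform.query(
            "SELECT `author`, `type`, `title`, `score` FROM PCOLLECTION "
                + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }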
    Nov 18, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 18, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 18, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 18, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8548853310358262198.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--gf53aoVKd5PTaaK5zCEmmQNC5qqLUy-Bcn3PbrVO-0.jar
    Nov 18, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.1/1c51cee3b49ce084d1e0140b0344ff9151a61c3/beam-vendor-calcite-1_28_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.1-rC8q7_PRvA8jzRG9mkOwFoyPqh5LDK33P7R0J3HxuNY.jar
    Nov 18, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 3 seconds
    Nov 18, 2021 12:47:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 18, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 9f4c1cf7350f1072fc68a7ab92ba73ffb7c79827587447c7d50f6da9b0807fa0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-n0wc9zUPEHL8aKerkrpz_7fHmCdYdEfH1Q9tqbCAf6A.pb
    Nov 18, 2021 12:47:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 18, 2021 12:47:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 18, 2021 12:47:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 18, 2021 12:47:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 18, 2021 12:47:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-17_16_47_26-13999159990309110688?project=apache-beam-testing
    Nov 18, 2021 12:47:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-17_16_47_26-13999159990309110688
    Nov 18, 2021 12:47:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-17_16_47_26-13999159990309110688
    Nov 18, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-18T00:47:29.987Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 18, 2021 12:47:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:40.116Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:40.902Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:40.954Z: Expanding GroupByKey operations into optimizable parts.
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:40.992Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:41.114Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:41.160Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:41.221Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:41.654Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 12:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:47:41.742Z: Starting 5 workers in us-central1-c...
    Nov 18, 2021 12:48:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:48:04.011Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 18, 2021 12:48:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:48:28.160Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 18, 2021 12:48:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:48:55.419Z: Workers have started successfully.
    Nov 18, 2021 12:48:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:48:55.464Z: Workers have started successfully.
    Nov 18, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:49:25.319Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 18, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:49:25.504Z: Cleaning up.
    Nov 18, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:49:25.598Z: Stopping worker pool...
    Nov 18, 2021 12:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:51:43.491Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 18, 2021 12:51:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-18T00:51:43.536Z: Worker pool stopped.
    Nov 18, 2021 12:51:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-17_16_47_26-13999159990309110688 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2884d281-ccbc-407f-8ca3-55ec5b323e87 and timestamp: 2021-11-18T00:51:50.319000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.137

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 18, 2021 12:51:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 44.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 31s
152 actionable tasks: 100 executed, 52 from cache

Publishing build scan...
https://gradle.com/s/tizokbesv3af6

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2681

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2681/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-12818] Write windowed file output to a consistent subdirectory

[samuelw] [BEAM-13268] Change autosharding big query inserts to happen in parallel

[noreply] [BEAM-3293] Add check for error on NewIterable call (#16006)


------------------------------------------
[...truncated 344.45 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 53 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 13d79b7c610a8bc4bb4dab7ddf5d3646
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 53'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 53'
Successfully started process 'Gradle Test Executor 53'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 17, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 17, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1535841857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1690708937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 17, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 17, 2021 6:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 17, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 17, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 17, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7205545304166035260.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SkWKcKl-x7aRsDo4Sipw4dGWVlxF-SHljOA6pvxSGiw.jar
    Nov 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 714afed2a6686d9fdb8daa333214159c5aecf561cdb343c7c813d0daa3011b62> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cUr-0qZobZ_bjaozMhQVnFrs9WHNs0PHyBPQ2qMBG2I.pb
    Nov 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 17, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 17, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 17, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-17_10_45_30-7348769477879973926?project=apache-beam-testing
    Nov 17, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-17_10_45_30-7348769477879973926
    Nov 17, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-17_10_45_30-7348769477879973926
    Nov 17, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-17T18:45:33.344Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:45.387Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.163Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.207Z: Expanding GroupByKey operations into optimizable parts.
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.246Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.322Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.347Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.379Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.812Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:45:46.892Z: Starting 5 workers in us-central1-c...
    Nov 17, 2021 6:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:46:18.147Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 17, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:46:31.312Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 17, 2021 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:46:59.441Z: Workers have started successfully.
    Nov 17, 2021 6:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:46:59.483Z: Workers have started successfully.
    Nov 17, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:47:27.621Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:47:27.843Z: Cleaning up.
    Nov 17, 2021 6:47:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:47:27.961Z: Stopping worker pool...
    Nov 17, 2021 6:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:49:46.758Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 17, 2021 6:49:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T18:49:46.821Z: Worker pool stopped.
    Nov 17, 2021 6:49:53 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-17_10_45_30-7348769477879973926 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c255d45c-fd53-4680-bf36-fa24c398ee5b and timestamp: 2021-11-17T18:49:53.676000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.724

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 6:49:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 53 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 41.915 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/mk6ci5svr7ujo

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2680

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2680/display/redirect>

Changes:


------------------------------------------
[...truncated 338.03 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 29dc949f1e30255a00c5521addfc7a62
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 17, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 17, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 17, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1535841857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1690708937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
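
The frames at BeamSqlRelUtils.toPCollection show the shape of the failing call: a SQL string is planned to a BeamRelNode and then materialized as a PCollection of Rows. A runnable, self-contained sketch of the same SQL-over-rows path, using SqlTransform over an in-memory table rather than the IT's BeamSqlEnv and BigQuery table; all data and names below are illustrative:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlOverRowsSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder()
            .addStringField("author")
            .addInt64Field("score")
            .build();

        // RowCoder.of(schema) is a SchemaCoder, so this PCollection carries a
        // schema and the coder error from the log cannot occur here.
        PCollection<Row> input = p.apply(Create.of(
                Row.withSchema(schema).addValues("alice", 3L).build(),
                Row.withSchema(schema).addValues("bob", 1L).build())
            .withCoder(RowCoder.of(schema)));

        // A single input PCollection is addressable in SQL as PCOLLECTION.
        input.apply(SqlTransform.query(
            "SELECT author, score FROM PCOLLECTION WHERE score > 2"));

        p.run().waitUntilFinish();
      }
    }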

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 17, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
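
    The INFO line above, together with the usedFields list in the BEAMPlan, is the observable effect of the push-down: the projected columns and the supported filter travel into the BigQuery Storage API read itself. At the IO level this corresponds roughly to the following sketch; the table id is an assumption, and the IT drives this through the SQL planner rather than a hand-written read:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table id
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Projection pushed down (usedFields in the BEAMPlan above):
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Filter pushed down, as printed in the INFO line above:
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }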
    Nov 17, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2349761798685843152.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yQZjsF5F7Tu76tp727sz9OO81-0ypF09ql7ZQFvoexw.jar
    Nov 17, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 17, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 17, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 69ec8964781d73e90cbd8cb74f670cc7e08548a16fd310785815275d3c9dc01a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aeyJZHgdc-kMvYy3T2cMx-CFSKFv0xB4WBUnXTydwBo.pb
    Nov 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 17, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 17, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-17_04_45_10-17124438145524355372?project=apache-beam-testing
    Nov 17, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-17_04_45_10-17124438145524355372
    Nov 17, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-17_04_45_10-17124438145524355372
    Nov 17, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-17T12:45:15.203Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:20.463Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.469Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.529Z: Expanding GroupByKey operations into optimizable parts.
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.582Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.659Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.693Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 17, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:21.717Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 17, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:22.103Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:22.193Z: Starting 5 workers in us-central1-a...
    Nov 17, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:45:48.651Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 17, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:46:13.212Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 17, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:46:38.846Z: Workers have started successfully.
    Nov 17, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:46:38.879Z: Workers have started successfully.
    Nov 17, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:47:06.655Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:47:06.808Z: Cleaning up.
    Nov 17, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:47:06.906Z: Stopping worker pool...
    Nov 17, 2021 12:49:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:49:34.214Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 17, 2021 12:49:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T12:49:34.260Z: Worker pool stopped.
    Nov 17, 2021 12:49:41 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-17_04_45_10-17124438145524355372 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2a2ecb79-eeeb-457e-8640-1f01329ada45 and timestamp: 2021-11-17T12:49:41.595000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.384

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 12:49:41 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
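
    This warning concerns metrics publication only, not test correctness: no InfluxDB measurement/database was supplied to the run, so the results above are not forwarded to InfluxDB. A sketch of the settings object the publisher works from, assuming the InfluxDBSettings builder in the same testutils package the warning comes from; every value below is a placeholder:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Placeholders only; the Jenkins jobs would pass these as options
        // rather than literals.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        System.out.println("Settings built: " + settings);
      }
    }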

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 49.694 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xusb5f2kew2vm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2679

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2679/display/redirect>

Changes:


------------------------------------------
[...truncated 338.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 29dc949f1e30255a00c5521addfc7a62
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
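
The -DbeamTestPipelineOptions JSON in the command line above is how the job hands pipeline options to the test JVM; Beam's TestPipeline reads that system property back. A minimal sketch of that consumption path (standard TestPipeline behavior; the printed getter is just for illustration):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // Parses the JSON array from the beamTestPipelineOptions system
        // property, falling back to defaults when it is unset.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println("Runner: " + options.getRunner().getSimpleName());
      }
    }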

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 17, 2021 6:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 17, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 17, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1535841857]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1690708937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 17, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 17, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 17, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 17, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 17, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test271569923505833156.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OC2nBIR3Cc2vV5c7PKRQkk0atlm9thRosy9wghGAIZ4.jar
    Nov 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash 334240f198343f2173929e5b77b03a33a7f896ce008740de63977766f8e972bf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-M0JA8Zg0PyFzkp5bd7A6M6f4ls4Ah0DeY5d3Zvjpcr8.pb
    Nov 17, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 17, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-16_22_45_15-15739275581178580703?project=apache-beam-testing
    Nov 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-16_22_45_15-15739275581178580703
    Nov 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-16_22_45_15-15739275581178580703
    Nov 17, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-17T06:45:18.336Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 17, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:27.278Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.033Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.072Z: Expanding GroupByKey operations into optimizable parts.
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.103Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.179Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.231Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.263Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.679Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:28.759Z: Starting 5 workers in us-central1-a...
    Nov 17, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:45:37.762Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 17, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:46:13.141Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 17, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:46:39.204Z: Workers have started successfully.
    Nov 17, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:46:39.246Z: Workers have started successfully.
    Nov 17, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:47:07.115Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:47:07.321Z: Cleaning up.
    Nov 17, 2021 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:47:07.419Z: Stopping worker pool...
    Nov 17, 2021 6:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:49:37.040Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 17, 2021 6:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T06:49:37.075Z: Worker pool stopped.
    Nov 17, 2021 6:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-16_22_45_15-15739275581178580703 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c4b3354a-9b17-4179-9be0-02f0525355b2 and timestamp: 2021-11-17T06:49:46.092000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                       7.8

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 6:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 49.363 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/jhvp7m2mus4te

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2678

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2678/display/redirect?page=changes>

Changes:

[Pablo Estrada] [BEAM-8033] Throwing an error if a dataSourceProviderFn is defined twice

[minherz] fix: update dlp dependency version range

[noreply] Generate Python container dependencies in an automated way. (#15927)

[noreply] [BEAM-13264] Allow pyarrow up to 6.x (#15995)

[noreply] Update the go get command with v2 (#15945)

[noreply] Go Quickstart: switch to Spark 3 JobServer for better out-of-box

[noreply] Golint fixes for recent Go SDK import (#15999)

[noreply] Merge pull request #15997: [BEAM-8688] Upgrading gcsio library to latest


------------------------------------------
[...truncated 355.62 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 6 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 29dc949f1e30255a00c5521addfc7a62
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 17, 2021 12:46:33 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 17, 2021 12:46:34 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 17, 2021 12:46:35 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 17, 2021 12:46:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:46:37 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:38 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:46:39 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1054139362]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@970289771]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 17, 2021 12:46:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 17, 2021 12:46:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 17, 2021 12:46:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 17, 2021 12:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3589815249011119268.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-l_71Vw8JHWfpcUyx7fZZUDWdW-VcBRP0l2PYdLrnZRw.jar
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 17, 2021 12:46:45 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106192 bytes, hash cd13d7247b2369456acd8c6fa112a5cee87ef8ddca0f56ae95def596cb98e1b6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zRPXJHsjaUVqzYxvoRKlzuh--N3KD1auld71lsuY4bY.pb
    Nov 17, 2021 12:46:48 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 17, 2021 12:46:48 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 17, 2021 12:46:48 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 17, 2021 12:46:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 17, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-16_16_46_48-15944850627078495863?project=apache-beam-testing
    Nov 17, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-16_16_46_48-15944850627078495863
    Nov 17, 2021 12:46:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-16_16_46_48-15944850627078495863
    Nov 17, 2021 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-17T00:46:51.608Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 17, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:00.381Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 17, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:00.927Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:00.979Z: Expanding GroupByKey operations into optimizable parts.
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:01.020Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:01.133Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:01.182Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:01.236Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:02.088Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:02.578Z: Starting 5 workers in us-central1-c...
    Nov 17, 2021 12:47:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:12.646Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 17, 2021 12:47:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:47:47.389Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 17, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:48:16.910Z: Workers have started successfully.
    Nov 17, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:48:16.943Z: Workers have started successfully.
    Nov 17, 2021 12:48:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:48:46.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 17, 2021 12:48:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:48:47.182Z: Cleaning up.
    Nov 17, 2021 12:48:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:48:47.305Z: Stopping worker pool...
    Nov 17, 2021 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:51:08.598Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 17, 2021 12:51:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-17T00:51:08.653Z: Worker pool stopped.
    Nov 17, 2021 12:51:14 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-16_16_46_48-15944850627078495863 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f1fbab0b-8055-4704-93c4-8b36fb70c4e7 and timestamp: 2021-11-17T00:51:14.562000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.233

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 17, 2021 12:51:14 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
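
    This warning is why nothing reaches InfluxDB: the run was launched without the
    Influx measurement/database settings, so InfluxDBPublisher skips publishing (the
    BigQuery publication configured via --metricsBigQueryDataset and
    --metricsBigQueryTable is a separate path). Assuming the option names used by
    sibling Beam performance-test jobs (an assumption, not something shown in this
    log), the missing settings would be passed with the other pipeline options:

        "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=<host>"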

Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 44.295 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings
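
To see exactly which deprecated features fired, the failing task can be rerun from the Beam checkout with the flag Gradle suggests:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all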

BUILD FAILED in 6m 52s
152 actionable tasks: 108 executed, 44 from cache

Publishing build scan...
https://gradle.com/s/ks7t6v6pl2se2

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2677

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2677/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13178][Playground]

[noreply] [BEAM-3293] Finish E2E implementation of MultiMap side inputs, add

[noreply] Merge pull request #15740 from [BEAM-12936][Playground] Code editor -

[noreply] Merge pull request #15976 from [Playground][BEAM-12941][Bugfix] Fix

[noreply] Merge pull request #15991 [BEAM-13251][Playground] [Bugfix] Lint Fails

[noreply] Merge pull request #15990 from [BEAM-13177][Playground] Change using of

[noreply] [BEAM-13262] Forward for metrics.SingleResult (#15993)


------------------------------------------
[...truncated 341.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 962cc34d3f7d6fb7b48821a3f49c14ee
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 16, 2021 6:46:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 16, 2021 6:47:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 16, 2021 6:47:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 16, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:47:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:47:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@321445148]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
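
    The failure above is the coder-inference error for a PCollection of Beam Rows:
    the RowMonitor output never gets a schema attached, so none of the three coder
    sources listed in the message can produce one. The fix the message points at is
    attaching a schema with PCollection.setRowSchema (or an explicit RowCoder). A
    minimal, self-contained sketch of that pattern, using a hypothetical in-memory
    input in place of the HACKER_NEWS table:

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaFix {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create();
            // Hypothetical schema mirroring the four projected columns.
            Schema schema =
                Schema.builder()
                    .addNullableField("by", Schema.FieldType.STRING)
                    .addNullableField("type", Schema.FieldType.STRING)
                    .addNullableField("title", Schema.FieldType.STRING)
                    .addNullableField("score", Schema.FieldType.INT64)
                    .build();
            PCollection<Row> rows =
                p.apply(Create.of(
                        Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build()))
                    // Without this call, coder inference fails exactly as in the log.
                    .setRowSchema(schema);
            // The same projection/filter the test runs, against the in-memory rows.
            rows.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
            p.run().waitUntilFinish();
          }
        }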

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:47:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@496020169]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 6:47:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:47:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 16, 2021 6:47:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
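
    For context on what makes the push-down variant pass: in Beam SQL the BigQuery
    read method is chosen per table through its "method" property, and DIRECT_READ
    (the Storage Read API) is what lets the projected fields and the filter above be
    pushed into the source instead of into a BeamCalcRel. A sketch of the DDL shape,
    with a hypothetical LOCATION and most columns elided:

        CREATE EXTERNAL TABLE HACKER_NEWS (
          `by` VARCHAR, `type` VARCHAR, `title` VARCHAR, `score` BIGINT
        )
        TYPE bigquery
        LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
        TBLPROPERTIES '{"method": "DIRECT_READ"}'
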
    Nov 16, 2021 6:47:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 16, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 16, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 16, 2021 6:47:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-Po6UxQnvZAnOZPZ17rFSVvi4zLKbQDp9zDU_X6nL54g.jar
    Nov 16, 2021 6:47:27 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6568927196623991160.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aX3azsLlchyNNzk3ZYMDAdE5Va9eZFbMWdWyJJew0W0.jar
    Nov 16, 2021 6:47:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 4 seconds
    Nov 16, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 16, 2021 6:47:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 3757fb3d1fb41ad0c5116607aef0f9f9dd2cf8993edbbfe1c4a1d206dbb839d1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-N1f7PR-0GtDFEWYHrvD5-d0s-Jk-27_hxKHSBtu4OdE.pb
    Nov 16, 2021 6:47:34 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 16, 2021 6:47:34 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 16, 2021 6:47:34 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 16, 2021 6:47:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 16, 2021 6:47:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-16_10_47_34-11258706859495832808?project=apache-beam-testing
    Nov 16, 2021 6:47:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-16_10_47_34-11258706859495832808
    Nov 16, 2021 6:47:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-16_10_47_34-11258706859495832808
    Nov 16, 2021 6:47:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-16T18:47:38.332Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:46.153Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:46.853Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:46.892Z: Expanding GroupByKey operations into optimizable parts.
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:46.932Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:47.007Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:47.035Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 16, 2021 6:47:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:47.079Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 16, 2021 6:47:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:47.443Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 6:47:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:47:47.515Z: Starting 5 workers in us-central1-c...
    Nov 16, 2021 6:48:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:48:18.354Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 16, 2021 6:48:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:48:33.400Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 16, 2021 6:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:48:59.244Z: Workers have started successfully.
    Nov 16, 2021 6:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:48:59.275Z: Workers have started successfully.
    Nov 16, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:49:27.829Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:49:28.021Z: Cleaning up.
    Nov 16, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:49:28.117Z: Stopping worker pool...
    Nov 16, 2021 6:51:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:51:46.153Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 16, 2021 6:51:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T18:51:46.248Z: Worker pool stopped.
    Nov 16, 2021 6:51:53 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-16_10_47_34-11258706859495832808 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2eb996e1-87b7-4335-baa4-2a91852cca47 and timestamp: 2021-11-16T18:51:53.978000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.028

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 6:51:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.154 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.139 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 17.837 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 20s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/go6kazjgj5llq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2676

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2676/display/redirect>

Changes:


------------------------------------------
[...truncated 338.49 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 962cc34d3f7d6fb7b48821a3f49c14ee
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 16, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 16, 2021 12:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 16, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 16, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1310280583]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1757353858]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 16, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 16, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 16, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 16, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 16, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test629974698443344380.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-a5k5t9Gp7pQCL3Yr0wZvnY8EI3W7iNLDLJgUpHyaAHs.jar
    Nov 16, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 16, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 16, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash e53ad5991f41d00f54b08e37090f9b4ed4fc4ea5b75e7c555cc83cb98fb220e6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5TrVmR9B0A9UsI43CQ-bTtT8TqW3XnxVXMg8uY-yIOY.pb
    Nov 16, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 16, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 16, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 16, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 16, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-16_04_45_06-11703444222630290598?project=apache-beam-testing
    Nov 16, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-16_04_45_06-11703444222630290598
    Nov 16, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-16_04_45_06-11703444222630290598
    Nov 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-16T12:45:12.081Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:17.989Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:18.653Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:18.760Z: Expanding GroupByKey operations into optimizable parts.
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:18.878Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:19.042Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:19.099Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:19.163Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:19.493Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:19.552Z: Starting 5 workers in us-central1-c...
    Nov 16, 2021 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:46.949Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 16, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:45:59.655Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 16, 2021 12:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:46:26.021Z: Workers have started successfully.
    Nov 16, 2021 12:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:46:26.055Z: Workers have started successfully.
    Nov 16, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:46:55.067Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:46:55.229Z: Cleaning up.
    Nov 16, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:46:55.308Z: Stopping worker pool...
    Nov 16, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:49:20.935Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 16, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T12:49:21.007Z: Worker pool stopped.
    Nov 16, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-16_04_45_06-11703444222630290598 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 555559d8-1b1c-4a17-80b8-59ef6ad5885a and timestamp: 2021-11-16T12:49:27.607000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.482

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 12:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 38.402 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/szdfb54wwbney

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2675

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2675/display/redirect?page=changes>

Changes:

[brachipa] [BEAM-12737] error handling for sql calc error

[brachipa] [BEAM-12737] error handling for sql calc error

[brachipa] [BEAM-12737] error handling for sql calc error

[brachipa] [BEAM-12737] add test

[brachipa] [BEAM-12737] fix build

[brachipa] [BEAM-12737] fix build

[brachipa] revert

[brachipa] [beam-12737] cr fix

[brachipa] [beam-12737] cr fixes

[brachipa] [beam-12737] send errorTransformer to `buildPTransform`

[brachipa] [beam-12737] send errorTransformer to `buildPTransform`

[brachipa] [beam-12737] fix type `? extends POutput`

[brachipa] [beam-12737] add complex test, change error row type

[noreply] [BEAM-13221] Don't drop last data. (#15939)


------------------------------------------
[...truncated 341.49 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 962cc34d3f7d6fb7b48821a3f49c14ee
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 18'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 18'
Successfully started process 'Gradle Test Executor 18'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 16, 2021 6:45:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 16, 2021 6:45:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 16, 2021 6:45:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 16, 2021 6:45:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:45:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:45:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:45:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:45:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1310280583]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
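
The IllegalStateException above is Beam's standard missing-coder failure for Row PCollections, and its message names the fix: attach a schema so a RowCoder can be inferred. A minimal, self-contained sketch (field names mirror the query's projected columns; the class name and data are illustrative):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();
        Row row = Row.withSchema(schema).addValues("a", "story", "t", 3).build();
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));
        // Without a schema (or an explicit setCoder), a downstream apply() fails
        // exactly as in the log above; setRowSchema attaches RowCoder.of(schema).
        rows.setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }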

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 16, 2021 6:45:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 6:45:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1757353858]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:106)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:48)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
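
For context on the failing call path: BeamSqlRelUtils.toPCollection is the internal step behind applying SQL to a schema-aware PCollection. A generic, self-contained sketch of the public equivalent via SqlTransform (names and data are illustrative):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
        Schema s = Schema.builder().addStringField("type").addInt32Field("score").build();
        PCollection<Row> in =
            p.apply(Create.of(Row.withSchema(s).addValues("story", 3).build())
                .withRowSchema(s));
        // Single-input queries refer to their input as PCOLLECTION.
        in.apply(SqlTransform.query("SELECT type, score FROM PCOLLECTION WHERE score > 2"));
        p.run().waitUntilFinish();
      }
    }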

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 16, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
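
At the IO layer, the push-down logged above amounts to requesting only the used fields and letting BigQuery evaluate the supported predicate via the Storage Read API. A sketch using the public BigQueryIO knobs (the table reference is illustrative):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative table ref
                .withMethod(Method.DIRECT_READ)
                // Only the fields the query uses are fetched...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported filter is evaluated by BigQuery itself.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }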
    Nov 16, 2021 6:45:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-Im_LlokEkGcOJP4pI4OGjbpuZu662IAWuMfGbTGYeas.jar
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4538163136733320215.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-iKmRXJqoPunqYmDcTga9eEkBOr5CvSMxP_8Kh9C_IwI.jar
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-gN4bkJvUYYy5RCwXc4wAbV3nb5fkoyJUU3rNj5CoXCk.jar
    Nov 16, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Nov 16, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 4 files newly uploaded in 0 seconds
    Nov 16, 2021 6:46:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 16, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash d87bbf9471f8846f754106bbce5041470e8b85c0d11dfdce01bcbf9b815b1d58> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2Hu_lHH4hG91QQa7zlBBRw6LhcDRHf3OAby_m4FbHVg.pb
    Nov 16, 2021 6:46:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 16, 2021 6:46:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 16, 2021 6:46:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 16, 2021 6:46:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 16, 2021 6:46:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-15_22_46_05-13278373343043031341?project=apache-beam-testing
    Nov 16, 2021 6:46:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-15_22_46_05-13278373343043031341
    Nov 16, 2021 6:46:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-15_22_46_05-13278373343043031341
    Nov 16, 2021 6:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-16T06:46:08.818Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:16.131Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:16.959Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.003Z: Expanding GroupByKey operations into optimizable parts.
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.031Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.100Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.128Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.159Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.491Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:17.552Z: Starting 5 workers in us-central1-a...
    Nov 16, 2021 6:46:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:46:43.894Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 16, 2021 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:47:07.784Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 16, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:47:35.311Z: Workers have started successfully.
    Nov 16, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:47:35.341Z: Workers have started successfully.
    Nov 16, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:48:02.722Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:48:02.888Z: Cleaning up.
    Nov 16, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:48:02.966Z: Stopping worker pool...
    Nov 16, 2021 6:50:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:50:26.803Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 16, 2021 6:50:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T06:50:26.857Z: Worker pool stopped.
    Nov 16, 2021 6:50:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-15_22_46_05-13278373343043031341 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2c152436-30ca-4938-8eeb-17a77afcc385 and timestamp: 2021-11-16T06:50:33.926000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.409

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 6:50:34 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
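
The warning above means the metrics were computed but not published to InfluxDB because the database/measurement settings were absent from this run. In comparable Beam performance jobs they are passed as additional pipeline options; the option names and values below are assumed for illustration, not taken from this build:

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086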

Gradle Test Executor 18 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 45.267 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 15s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/oj4ruky4y7rxk

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2674

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2674/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Fully resolve type variables after PTransform type inference.

[Robert Bradshaw] Key-inferable type hints for CombinePerKey.

[Robert Bradshaw] Implement type variable binding for union types.

[Robert Bradshaw] Add comment about wildcard.

[noreply] [BEAM-3293] Add initial MultiMap side input backing type (#15975)

[Robert Bradshaw] Type inference deterministic coder fix for old Python versions.

[Robert Bradshaw] Also implement type matching for union types.

[noreply] [BEAM-13238] Bump all Dataflow Java VR timeouts (#15979)

[noreply] Merge pull request #15978: [BEAM-8691] Upgrading Java deps compatible


------------------------------------------
[...truncated 353.91 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d17f4e29fd5602acb8b6af947d94e03
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 16, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 16, 2021 12:46:18 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 16, 2021 12:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 16, 2021 12:46:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:46:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:46:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:46:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 16, 2021 12:46:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 16, 2021 12:46:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 16, 2021 12:46:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 16, 2021 12:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 16, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 16, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 16, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1968734568631861977.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XCJi6M8DBHyjFkzVTIr1o7NT4vdKDBYGfJRVNYMctiA.jar
    Nov 16, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 16, 2021 12:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 16, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash b180372dc94d24f832ca5752a716adcda54778f9062636d709424ed27d245255> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sYA3LclNJPgyyldSpxatzaVHePkGJjbXCUJO0n0kUlU.pb
    Nov 16, 2021 12:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 16, 2021 12:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 16, 2021 12:46:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 16, 2021 12:46:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 16, 2021 12:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-15_16_46_30-15322670829978711507?project=apache-beam-testing
    Nov 16, 2021 12:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-15_16_46_30-15322670829978711507
    Nov 16, 2021 12:46:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-15_16_46_30-15322670829978711507
    Nov 16, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-16T00:46:33.808Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 16, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:40.123Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:40.979Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.006Z: Expanding GroupByKey operations into optimizable parts.
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.045Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.140Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.165Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.214Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.573Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:46:41.665Z: Starting 5 workers in us-central1-c...
    Nov 16, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:47:04.635Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 16, 2021 12:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:47:20.535Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 16, 2021 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:47:48.002Z: Workers have started successfully.
    Nov 16, 2021 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:47:48.028Z: Workers have started successfully.
    Nov 16, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:48:16.916Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 16, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:48:17.060Z: Cleaning up.
    Nov 16, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:48:17.138Z: Stopping worker pool...
    Nov 16, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:50:34.530Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 16, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-16T00:50:34.581Z: Worker pool stopped.
    Nov 16, 2021 12:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-15_16_46_30-15322670829978711507 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4a65123c-dd7a-4884-859b-6718c105b6db and timestamp: 2021-11-16T00:50:42.556000000Z:
                     Metric:                    Value:
                   read_time                     7.411
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 16, 2021 12:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 28.871 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 24s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/joe46f6q54wxw

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2673

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2673/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13226] Batch metrics parsing errors (#15972)

[noreply] [BEAM-13239] pass through io.EOF when decoding row (#15971)

[noreply] [BEAM-5378] Register types & funcs for examples. (#15970)


------------------------------------------
[...truncated 337.83 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 15, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 15, 2021 6:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 15, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 15, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
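
Note the contrast with the BeamCalcRel plans above: here the planner folds both the projection (usedFields=[by, type, title, score]) and the supported filter into the source, so the Storage Read API only materializes the four needed columns and the matching rows. At the BigQueryIO level this corresponds to a DIRECT_READ with selected fields and a row restriction; a rough stand-alone sketch is below (the class name and the public table reference are illustrative assumptions, running it requires GCP credentials, and it is not the IT's code):

    import java.util.Arrays;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")  // illustrative table
                .withMethod(Method.DIRECT_READ)                 // Storage Read API
                // Projection push-down: only these columns are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: the same predicate the planner logged above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }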
    Nov 15, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 15, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 15, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 15, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1508724358602384535.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8Vyka80hhfLgRziIzQrGvrnD1Z5Ze_0jO0tM7LaRxw8.jar
    Nov 15, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 15, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 15, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 678ab1b6b8f7f5920b30ea7cc3287c1150c6da0c1f72b1139537d90e0a34431b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Z4qxtrj39ZILMOp8wyh8EVDG2gwfcrETlTfZDgo0Qxs.pb
    Nov 15, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 15, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 15, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 15, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-15_10_45_02-3635057345270342018?project=apache-beam-testing
    Nov 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-15_10_45_02-3635057345270342018
    Nov 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-15_10_45_02-3635057345270342018
    Nov 15, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-15T18:45:06.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:13.920Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.603Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.634Z: Expanding GroupByKey operations into optimizable parts.
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.661Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.720Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.749Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 15, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:14.773Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 15, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:15.149Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:15.228Z: Starting 5 workers in us-central1-c...
    Nov 15, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:45:34.474Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 15, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:46:00.507Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 15, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:46:27.089Z: Workers have started successfully.
    Nov 15, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:46:27.188Z: Workers have started successfully.
    Nov 15, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:47:04.052Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:47:04.243Z: Cleaning up.
    Nov 15, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:47:04.327Z: Stopping worker pool...
    Nov 15, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:49:26.305Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 15, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T18:49:26.343Z: Worker pool stopped.
    Nov 15, 2021 6:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-15_10_45_02-3635057345270342018 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7715470b-4f9c-4a77-8277-7914aa3568fc and timestamp: 2021-11-15T18:49:34.854000000Z:
                     Metric:                    Value:
                   read_time                    10.909
                 fields_read                 4375276.0
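
Reading these numbers requires an assumption about the metric semantics: if fields_read counts individual field values delivered to the pipeline (four projected columns per row), then roughly 4,375,276 / 4 ≈ 1.09 million rows survived the pushed-down filter, and read_time would be the read duration in seconds. Both readings are inferences from the metric names, not documented definitions.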

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 6:49:34 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
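
This warning is consistent with the test invocation above: the job wires metrics to BigQuery (--metricsBigQueryDataset/--metricsBigQueryTable) but configures no InfluxDB measurement or database, so the InfluxDB publisher skips publishing instead of failing the run.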

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 48.217 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pdtbwqjosdsmm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2672

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2672/display/redirect>

Changes:


------------------------------------------
[...truncated 337.74 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 15, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 15, 2021 12:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 15, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 15, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 15, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 15, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 15, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 15, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 15, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 15, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8034618853858312706.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ExERla5Xyt9xq1yLWHka167c6PkgTUOLGOP214dp-rU.jar
    Nov 15, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 15, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 15, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash abadf786aed5e8bd722c0cb2094d3e4bcda9f7d3852ea3db3dbfa107adbfe950> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-q633hq7V6L1yLAyyCU0-S82p99OFLqPbPb-hB62_6VA.pb
    Nov 15, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 15, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 15, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 15, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 15, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-15_04_45_03-1062640529583498718?project=apache-beam-testing
    Nov 15, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-15_04_45_03-1062640529583498718
    Nov 15, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-15_04_45_03-1062640529583498718
    Nov 15, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-15T12:45:06.896Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:14.280Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.000Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.054Z: Expanding GroupByKey operations into optimizable parts.
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.081Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.151Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.180Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.213Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.744Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:15.824Z: Starting 5 workers in us-central1-c...
    Nov 15, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:45:41.663Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 15, 2021 12:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:00.736Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 15, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:28.166Z: Workers have started successfully.
    Nov 15, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:28.197Z: Workers have started successfully.
    Nov 15, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:57.685Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:57.855Z: Cleaning up.
    Nov 15, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:46:57.935Z: Stopping worker pool...
    Nov 15, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:49:22.361Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 15, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T12:49:22.413Z: Worker pool stopped.
    Nov 15, 2021 12:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-15_04_45_03-1062640529583498718 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 29cbb079-bb6e-44de-bdea-07257677ca78 and timestamp: 2021-11-15T12:49:29.744000000Z:
                     Metric:                    Value:
                   read_time                     8.501
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 43.269 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/63qfn6pqhfpjk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2671

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2671/display/redirect>

Changes:


------------------------------------------
[...truncated 337.92 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 15, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 15, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 15, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 15, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 15, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 15, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test721459077656191374.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gpXjqgw1x8I5Ym596KDNnxM_T1rO_KSUvHA0CnQdlKY.jar
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 15, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 1bc0ac89e19921ed2c9c3148d00fea66af9bbf4593cdd833282049ab13c0e054> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-G8CsieGZIe0snDFI0A_qZq-bv0WTzdgzKCBJqxPA4FQ.pb
    Nov 15, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 15, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 15, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 15, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 15, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-14_22_45_07-9488448143579715788?project=apache-beam-testing
    Nov 15, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-14_22_45_07-9488448143579715788
    Nov 15, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-14_22_45_07-9488448143579715788
    Nov 15, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-15T06:45:11.034Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.062Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.641Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.693Z: Expanding GroupByKey operations into optimizable parts.
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.728Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.791Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.824Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:18.858Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:19.207Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:19.280Z: Starting 5 workers in us-central1-c...
    Nov 15, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:45:24.294Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 15, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:46:02.757Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 15, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:46:32.121Z: Workers have started successfully.
    Nov 15, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:46:32.155Z: Workers have started successfully.
    Nov 15, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:47:04.532Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:47:04.751Z: Cleaning up.
    Nov 15, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:47:04.819Z: Stopping worker pool...
    Nov 15, 2021 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:49:29.027Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 15, 2021 6:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T06:49:29.058Z: Worker pool stopped.
    Nov 15, 2021 6:49:36 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-14_22_45_07-9488448143579715788 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b1052ffa-02f7-4871-8950-9e19f6d6be08 and timestamp: 2021-11-15T06:49:36.146000000Z:
                     Metric:                    Value:
                   read_time                     8.234
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 6:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 45.256 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3aqrds4yvfb2u

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2670

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2670/display/redirect>

Changes:


------------------------------------------
[...truncated 340.88 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 15, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
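
As the warning says, --workerHarnessContainerImage is the deprecated legacy spelling; the same (empty) value would now be passed as "--sdkContainerImage=" in beamTestPipelineOptions, since this job appears to supply its worker through --dataflowWorkerJar rather than a container image.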
    Nov 15, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 15, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 15, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
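
    For context, this is the query each test sends through the Beam SQL planner. The IT goes through BeamSqlRelUtils directly, but an equivalent standalone sketch uses SqlTransform, assuming HACKER_NEWS has already been registered with the SQL environment via a table provider:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    // Sketch: run the same projection and filter as a SqlTransform query.
    // Assumes `pipeline` is an existing Pipeline and HACKER_NEWS is resolvable.
    PCollection<Row> result =
        PCollectionTuple.empty(pipeline)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
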
    Nov 15, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
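
    The failure above is Beam's standard missing-schema error: the RowMonitor ParDo emits Beam Rows, and a PCollection<Row> has no usable Coder until a schema is attached. A minimal sketch of the fix the exception itself recommends, with field types assumed from the query's projected columns and RowMonitorFn as a hypothetical stand-in for the test's DoFn:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema for the projected columns; field types are assumptions for illustration.
    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT64)
        .build();

    // `rows` stands for the PCollection<Row> produced upstream of the monitor.
    PCollection<Row> monitored =
        rows.apply("RowMonitor", ParDo.of(new RowMonitorFn()))  // RowMonitorFn: hypothetical DoFn<Row, Row>
            .setRowSchema(schema);  // attaches a SchemaCoder so downstream Coder inference succeeds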

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 15, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
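
    The plan above shows what push-down buys: only the four used fields are read, and the filter is evaluated by BigQuery itself via the Storage Read API. Outside of SQL, roughly the same direct-read configuration can be expressed on BigQueryIO (table reference hypothetical):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Sketch: direct read with column projection and a server-side row filter.
    BigQueryIO.readTableRows()
        .from("my-project:my_dataset.HACKER_NEWS")   // hypothetical table
        .withMethod(TypedRead.Method.DIRECT_READ)    // BigQuery Storage Read API
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
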
    Nov 15, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5116554833836520268.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-09sGGXOzXPM7FNUs8mxU-Ze_C0Dqrn-WwLBFclTmO8M.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Nov 15, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 5 files newly uploaded in 0 seconds
    Nov 15, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 15, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 18f1c655481bc8d5225333506d81b49b50e19100d7afdca3ed9ec080fc659ddc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GPHGVUgbyNUiUzNQbYG0m1DhkQDXr9yj7Z7AgPxlndw.pb
    Nov 15, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 15, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 15, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 15, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 15, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-14_16_45_10-16917615556460680788?project=apache-beam-testing
    Nov 15, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-14_16_45_10-16917615556460680788
    Nov 15, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-14_16_45_10-16917615556460680788
    Nov 15, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-15T00:45:13.861Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 15, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.193Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.823Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.857Z: Expanding GroupByKey operations into optimizable parts.
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.885Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.948Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:21.982Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 15, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:22.016Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 15, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:22.412Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:22.491Z: Starting 5 workers in us-central1-c...
    Nov 15, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:45:25.627Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 15, 2021 12:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:46:05.918Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 15, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:46:33.186Z: Workers have started successfully.
    Nov 15, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:46:33.213Z: Workers have started successfully.
    Nov 15, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:47:02.114Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 15, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:47:02.289Z: Cleaning up.
    Nov 15, 2021 12:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:47:02.396Z: Stopping worker pool...
    Nov 15, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:49:22.073Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 15, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-15T00:49:22.121Z: Worker pool stopped.
    Nov 15, 2021 12:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-14_16_45_10-16917615556460680788 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4bbe3212-926d-40a2-91cf-952bceb477f2 and timestamp: 2021-11-15T00:49:27.305000000Z:
                     Metric:                    Value:
                   read_time                     8.698
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 15, 2021 12:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 35.133 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/x4s3itjo27hbe

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2669

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2669/display/redirect>

Changes:


------------------------------------------
[...truncated 338.27 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 14, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 14, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 14, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 14, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 14, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 14, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 14, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 14, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 14, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7109182198164097090.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-N6Y_sfEy75IHKbakPeLdJWARpid2_gSNN3U207gh3Ms.jar
    Nov 14, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 14, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 14, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash f4b0a715290c056912bdff91692f721ed2849eac2584882306cd5d2e6ffb3afc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9LCnFSkMBWkSvf-RaS9yHtKEnqwlhIgjBs1dLm_7Ovw.pb
    Nov 14, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 14, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 14, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 14, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 14, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-14_10_45_03-299925038793648697?project=apache-beam-testing
    Nov 14, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-14_10_45_03-299925038793648697
    Nov 14, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-14_10_45_03-299925038793648697
    Nov 14, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-14T18:45:07.194Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:13.496Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.012Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.052Z: Expanding GroupByKey operations into optimizable parts.
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.077Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.142Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.188Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.210Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.540Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:14.622Z: Starting 5 workers in us-central1-c...
    Nov 14, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:24.255Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 14, 2021 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:45:58.626Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 14, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:46:24.595Z: Workers have started successfully.
    Nov 14, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:46:24.646Z: Workers have started successfully.
    Nov 14, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:46:54.003Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:46:54.160Z: Cleaning up.
    Nov 14, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:46:54.246Z: Stopping worker pool...
    Nov 14, 2021 6:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:49:09.165Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 14, 2021 6:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T18:49:09.211Z: Worker pool stopped.
    Nov 14, 2021 6:49:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-14_10_45_03-299925038793648697 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 62805cd8-8d94-4988-b201-dcdd3f3e3837 and timestamp: 2021-11-14T18:49:16.083000000Z:
                     Metric:                    Value:
                   read_time                     8.853
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 6:49:16 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 28.665 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 57s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rfgbdagdyuhva

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2668

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2668/display/redirect>

Changes:


------------------------------------------
[...truncated 338.32 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 14, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 14, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 14, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 14, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
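
For reference, the remedy the exception itself names (PCollection.setRowSchema) looks
roughly like the sketch below. This is a hedged illustration, not the test's actual
code: "pipeline" and "beamRelNode" stand in for objects the IT already has, and the
field types are guessed from the SELECT list above rather than read from the real
HACKER_NEWS table definition.

    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch only: attach an explicit schema so Beam can infer a row coder.
    // Field names mirror the SELECT list; the field types are assumptions.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    // pipeline and beamRelNode are assumed from the surrounding test context.
    PCollection<Row> rows = BeamSqlRelUtils.toPCollection(pipeline, beamRelNode);
    rows.setRowSchema(schema); // equivalent to setCoder(RowCoder.of(schema))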

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
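
At the Storage API level, the projection and filter pushed down above correspond to a
read session that selects only the used fields and applies a row restriction. A minimal
sketch with the com.google.cloud.bigquery.storage.v1 client follows (illustrative only;
the IT reaches this through BigQueryIO rather than building the session itself):

    import com.google.cloud.bigquery.storage.v1.ReadSession;

    // Sketch: what the push-down amounts to in a BigQuery Storage read session.
    ReadSession.TableReadOptions readOptions =
        ReadSession.TableReadOptions.newBuilder()
            .addSelectedFields("by")      // projected columns from usedFields above
            .addSelectedFields("type")
            .addSelectedFields("title")
            .addSelectedFields("score")
            .setRowRestriction("(type = 'story' OR type = 'job') AND score > 2")
            .build();
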
    Nov 14, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 14, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 14, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 14, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test400973402988823031.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-FPx5RrZoPDHy9B8GCPHwjRhPHbl7eH_f_ON_Qfm5a4A.jar
    Nov 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 1 seconds
    Nov 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 989e9b1a230b8c825879b6837977de36b21b2fc453a0b4b679a862b36cd6bd70> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mJ6bGiMLjIJYebaDeXfeNrIbL8RToLS2eahis2zWvXA.pb
    Nov 14, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 14, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-14_04_45_06-7180477496659827337?project=apache-beam-testing
    Nov 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-14_04_45_06-7180477496659827337
    Nov 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-14_04_45_06-7180477496659827337
    Nov 14, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-14T12:45:09.681Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:16.839Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.509Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.541Z: Expanding GroupByKey operations into optimizable parts.
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.574Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.646Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.673Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:17.707Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:18.133Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:18.218Z: Starting 5 workers in us-central1-c...
    Nov 14, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:22.381Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 14, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:45:57.660Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 14, 2021 12:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:46:25.602Z: Workers have started successfully.
    Nov 14, 2021 12:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:46:25.628Z: Workers have started successfully.
    Nov 14, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:46:54.131Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:46:54.272Z: Cleaning up.
    Nov 14, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:46:54.329Z: Stopping worker pool...
    Nov 14, 2021 12:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:49:13.798Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 14, 2021 12:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T12:49:13.837Z: Worker pool stopped.
    Nov 14, 2021 12:49:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-14_04_45_06-7180477496659827337 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e057d8f7-a8bc-4eb3-9e27-754f9806adb1 and timestamp: 2021-11-14T12:49:21.520000000Z:
                     Metric:                    Value:
                   read_time                       7.2
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 12:49:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
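
This warning means the InfluxDB measurement/database settings were not supplied, so the
metrics printed above are not persisted anywhere. Assuming the builder shape Beam's
testutils use elsewhere (withHost/withDatabase/withMeasurement ending in get(); an
assumption, and the values below are placeholders rather than this job's real
configuration), the missing settings would be constructed roughly like:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Assumed API shape; host/database/measurement values are placeholders.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();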

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 33.367 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vyrkvcoxsuuuo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2667

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2667/display/redirect>

Changes:


------------------------------------------
[...truncated 338.17 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 14, 2021 6:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 14, 2021 6:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 14, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 14, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 14, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 14, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6352450687409663827.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vTUoPWbtqy00KNJqU9kT31frpmSoNeM9FHH-q-jogpk.jar
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 0 seconds
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 14, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 431dbc3456743e0058734f059f32bd04df80c4b4048da39fb75d20bcc22509c5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Qx28NFZ0PgBYc08FnzK9BN-AxLQEjaOft10gvMIlCcU.pb
    Nov 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 14, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-13_22_45_02-5218425844993952169?project=apache-beam-testing
    Nov 14, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-13_22_45_02-5218425844993952169
    Nov 14, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-13_22_45_02-5218425844993952169
    Nov 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-14T06:45:05.774Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:12.660Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.418Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.466Z: Expanding GroupByKey operations into optimizable parts.
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.493Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.550Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.579Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.603Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 14, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.906Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:13.971Z: Starting 5 workers in us-central1-c...
    Nov 14, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:20.312Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 14, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:45:55.281Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 14, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:46:23.218Z: Workers have started successfully.
    Nov 14, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:46:23.246Z: Workers have started successfully.
    Nov 14, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:46:53.130Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:46:53.266Z: Cleaning up.
    Nov 14, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:46:53.376Z: Stopping worker pool...
    Nov 14, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:49:19.963Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 14, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T06:49:20.000Z: Worker pool stopped.
    Nov 14, 2021 6:49:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-13_22_45_02-5218425844993952169 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 18650a66-4d30-43a1-9b78-d75b8ae1c87d and timestamp: 2021-11-14T06:49:28.740000000Z:
                     Metric:                    Value:
                   read_time                      8.66
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 6:49:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.57 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ijlxumgzksp3a

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2666

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2666/display/redirect>

Changes:


------------------------------------------
[...truncated 338.31 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 14, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 14, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 14, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 14, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 14, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 14, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 14, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 14, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 14, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 14, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 14, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7172933593072508187.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gwMPfHCvQ6KpHDUvY-X1hnaNNuIdr5kcBPebywocpDA.jar
    Nov 14, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 14, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 14, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 6d238ce97e7ec44db19556ad8eae62ead344a126aea0a8d57d4109df6993487c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-bSOM6X5-xE2xlVatjq5i6tNEoSauoKjVfUEJ32mTSHw.pb
    Nov 14, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 14, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 14, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 14, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 14, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-13_16_45_05-8040092333831562551?project=apache-beam-testing
    Nov 14, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-13_16_45_05-8040092333831562551
    Nov 14, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-13_16_45_05-8040092333831562551
    Nov 14, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-14T00:45:08.249Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 14, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:13.640Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.448Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.473Z: Expanding GroupByKey operations into optimizable parts.
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.496Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.561Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.584Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.600Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:14.947Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:15.011Z: Starting 5 workers in us-central1-a...
    Nov 14, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:45:35.225Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 14, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:00.277Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 14, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:26.104Z: Workers have started successfully.
    Nov 14, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:26.147Z: Workers have started successfully.
    Nov 14, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:54.532Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 14, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:54.644Z: Cleaning up.
    Nov 14, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:46:54.703Z: Stopping worker pool...
    Nov 14, 2021 12:49:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:49:16.832Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 14, 2021 12:49:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-14T00:49:16.864Z: Worker pool stopped.
    Nov 14, 2021 12:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-13_16_45_05-8040092333831562551 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 79aaa516-7029-4d46-9d24-bc4bd778a145 and timestamp: 2021-11-14T00:49:23.817000000Z:
                     Metric:                    Value:
                   read_time                     8.189
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 14, 2021 12:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
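    (Note: this warning just means the InfluxDB publisher was left unconfigured for this job, so the metrics above are not pushed to InfluxDB. Judging by Beam's testutils, the run would need the measurement/database settings supplied as pipeline options -- the exact flag names, presumably something like --influxMeasurement and --influxDatabase, are an assumption here.)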

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
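(Note: the two failures in this run are presumably the same missing-Row-schema coder errors shown in full in the sibling builds below; the portion of this run's log containing them falls in the truncated region above.)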
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 35.787 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yveeo7mphachg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2665

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2665/display/redirect>

Changes:


------------------------------------------
[...truncated 338.10 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
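    (Note: with both the staged legacy-worker jar and slf4j-jdk14 on the test classpath, SLF4J resolves to the java.util.logging binding, which is why every log entry below is rendered in JUL's two-line "Nov 13, 2021 6:44:49 PM ..." format rather than through another backend.)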

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 13, 2021 6:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 13, 2021 6:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 13, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 13, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
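
    (Note: the failure above is Beam's generic complaint about a PCollection<Row> with no schema attached, and the message itself names the remedy: attach a schema so a RowCoder can be inferred. A minimal, self-contained sketch of that remedy follows -- the schema, values, and class name are hypothetical illustrations, not the integration test's actual code.)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema matching the four columns the test query projects.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build())
                    .withRowSchema(schema));

        // For a PCollection<Row> produced by some other transform, attaching
        // the schema afterwards is exactly what the error message asks for:
        //   rows.setRowSchema(schema);
        // Either way, Beam can then build a RowCoder instead of failing
        // coder inference as in the stack trace above.

        p.run().waitUntilFinish();
      }
    }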

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 13, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
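
    (Note: the BEAMPlan above is the push-down at work: the projection and the entire filter are folded into the BigQuery source itself (usedFields plus BigQueryFilter), so only four columns of pre-filtered rows ever leave BigQuery -- unlike the BeamCalcRel plans of the two failing tests, where filtering would happen after a full read. A minimal sketch of running the same query shape through Beam SQL follows; it uses a hypothetical in-memory stand-in table, since wiring up the real BigQuery-backed HACKER_NEWS table is elided here.)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical in-memory stand-in for the HACKER_NEWS table.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build())
                    .withRowSchema(schema));

        // Same query shape as the test's; when the source is a real
        // BigQueryTable with method=DIRECT_READ, the planner rewrites this
        // into the BeamPushDownIOSourceRel shown in the log above.
        PCollection<Row> result =
            PCollectionTuple.of("HACKER_NEWS", hackerNews)
                .apply(
                    SqlTransform.query(
                        "SELECT `by` AS author, type, title, score "
                            + "FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
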
    Nov 13, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2737725022020575136.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-hStCCqxCMIFcg0rhW6rszHzp5OwWl6odY3sr_TkrO30.jar
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 13, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106181 bytes, hash 9430bf4fedb6df6a1e69b89f3a95772701b5bac6db56d331ed4b27d973cb4af2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lDC_T-2232oeabifOpV3JwG1usbbVtMx7Usn2XPLSvI.pb
    Nov 13, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 13, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 13, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 13, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 13, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-13_10_45_02-5058727353245118304?project=apache-beam-testing
    Nov 13, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-13_10_45_02-5058727353245118304
    Nov 13, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-13_10_45_02-5058727353245118304
    Nov 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-13T18:45:05.780Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:12.562Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.090Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.128Z: Expanding GroupByKey operations into optimizable parts.
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.177Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.266Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.301Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.337Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 13, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.706Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:13.780Z: Starting 5 workers in us-central1-a...
    Nov 13, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:45:24.855Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 13, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:00.402Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 13, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:27.008Z: Workers have started successfully.
    Nov 13, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:27.045Z: Workers have started successfully.
    Nov 13, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:54.459Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:54.584Z: Cleaning up.
    Nov 13, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:46:54.659Z: Stopping worker pool...
    Nov 13, 2021 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:49:16.023Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 13, 2021 6:49:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T18:49:16.066Z: Worker pool stopped.
    Nov 13, 2021 6:49:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-13_10_45_02-5058727353245118304 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 988fddbd-4e12-4cbc-b784-70c6f07cf39a and timestamp: 2021-11-13T18:49:21.235000000Z:
                     Metric:                    Value:
                   read_time                     7.673
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 6:49:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 35.319 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/u76ya2gwfagvi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2664

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2664/display/redirect>

Changes:


------------------------------------------
[...truncated 338.91 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 13, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 13, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 13, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 13, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 13, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 13, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8051265730850594719.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SRqBXIAio4zBgclpapVjSjWkBYSxxQ2P1mL-jnhaVB8.jar
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 13, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash ec50d67270b92d3700d776d39a31c84a534f90709d3567ca2b00ab2bf8b446e3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7FDWcnC5LTcA13bTmjHISlNPkHCdNWfKKwCrK_i0RuM.pb
    Nov 13, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 13, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 13, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 13, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 13, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-13_04_45_07-5339616109480515260?project=apache-beam-testing
    Nov 13, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-13_04_45_07-5339616109480515260
    Nov 13, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-13_04_45_07-5339616109480515260
    Nov 13, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-13T12:45:10.734Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:18.615Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.396Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.438Z: Expanding GroupByKey operations into optimizable parts.
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.476Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.542Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.577Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 13, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.620Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 13, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:19.993Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:20.073Z: Starting 5 workers in us-central1-c...
    Nov 13, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:45:47.871Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 13, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:00.241Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 13, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:28.005Z: Workers have started successfully.
    Nov 13, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:28.043Z: Workers have started successfully.
    Nov 13, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:56.469Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:56.765Z: Cleaning up.
    Nov 13, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:46:56.852Z: Stopping worker pool...
    Nov 13, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:49:21.621Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 13, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T12:49:21.856Z: Worker pool stopped.
    Nov 13, 2021 12:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-13_04_45_07-5339616109480515260 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d92517d6-f8fc-4cbf-8e7e-bcbd4f37b493 and timestamp: 2021-11-13T12:49:29.485000000Z:
                     Metric:                    Value:
                   read_time                     7.459
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.754 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/chtijc72ls6mq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2663

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2663/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13223] Documents two new features of cross-language transforms


------------------------------------------
[...truncated 337.80 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 13, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 13, 2021 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
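
The two messages above are benign but avoidable: --workerHarnessContainerImage is the deprecated spelling of --sdkContainerImage, and the stagingLocation fallback can be silenced by setting the option explicitly. A minimal sketch of doing both programmatically, assuming a DataflowPipelineOptions-based job; the image and bucket names are placeholders, not values used by this test:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
            .withValidation().as(DataflowPipelineOptions.class);
        // Successor to the deprecated --workerHarnessContainerImage flag.
        options.setSdkContainerImage("gcr.io/my-project/custom-java8-sdk:latest");
        // Avoids the "falling back to gcpTempLocation" notice.
        options.setStagingLocation("gs://my-bucket/staging");
      }
    }
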
    Nov 13, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 13, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
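
For readers unfamiliar with where these plans come from: Beam SQL turns a query string into the relational plan shown above and then into Beam transforms. A minimal, self-contained sketch of issuing a query of the same shape through SqlTransform, assuming an in-memory PCOLLECTION rather than the test's BigQuery table provider; the schema and row are illustrative:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // Illustrative schema mirroring the columns used by the logged query.
        Schema schema = Schema.builder()
            .addStringField("by").addStringField("type")
            .addStringField("title").addInt32Field("score").build();
        PCollection<Row> input = p
            .apply(Create.of(Row.withSchema(schema)
                .addValues("alice", "story", "hello", 5).build()))
            .setRowSchema(schema);
        // `by` is a reserved word in Calcite SQL, hence the backticks.
        PCollection<Row> result = input.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score "
                + "FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }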


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
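
The root-cause list above is explicit: a PCollection of Row cannot get an inferred coder and needs a schema attached via setRowSchema. A minimal sketch of the pattern the message asks for, assuming an illustrative pass-through ParDo rather than the test's actual RowMonitor:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      static PCollection<Row> monitor(PCollection<Row> rows, Schema schema) {
        return rows
            .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void processElement(@Element Row row, OutputReceiver<Row> out) {
                out.output(row); // pass-through; a real monitor would record metrics
              }
            }))
            // Without this, PCollection.finishSpecifying throws the
            // IllegalStateException seen above: Row outputs need an explicit schema.
            .setRowSchema(schema);
      }
    }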

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
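
The filter string above is what the SQL layer pushes into the BigQuery Storage Read API alongside the projected fields. A hand-rolled equivalent on BigQueryIO looks roughly like the sketch below; the table reference is a placeholder, and it assumes this SDK version supports field selection and row restriction together with DIRECT_READ:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDown {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // placeholder table
                .withMethod(Method.DIRECT_READ)
                // Equivalent of usedFields in the plan above:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Equivalent of the pushed-down filter:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
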
    Nov 13, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 13, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 13, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 13, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4534065064533024888.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-H7rd9xGlNxFAcdDpLD2ubCmUY1VUnf3pJ1hqUtFn4jE.jar
    Nov 13, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 13, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 13, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 51437fae2dfe928f0c055694bee001125da34e55738db327e6fd99e128c6fe32> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UUN_ri3-ko8MBVaUvuABEl2jTlVzjbMn5v2Z4SjG_jI.pb
    Nov 13, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-12_22_45_07-5509006835222482628?project=apache-beam-testing
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-12_22_45_07-5509006835222482628
    Nov 13, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-12_22_45_07-5509006835222482628
    Nov 13, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-13T06:45:10.535Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:18.427Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.087Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.117Z: Expanding GroupByKey operations into optimizable parts.
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.148Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.220Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.252Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 13, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.286Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 13, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.605Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:19.680Z: Starting 5 workers in us-central1-c...
    Nov 13, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:45:41.941Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 13, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:00.621Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 13, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:27.828Z: Workers have started successfully.
    Nov 13, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:27.868Z: Workers have started successfully.
    Nov 13, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:57.952Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:58.146Z: Cleaning up.
    Nov 13, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:46:58.209Z: Stopping worker pool...
    Nov 13, 2021 6:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:49:20.282Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 13, 2021 6:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T06:49:20.326Z: Worker pool stopped.
    Nov 13, 2021 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-12_22_45_07-5509006835222482628 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 39f54ba0-d5cd-4dcb-9922-71182833f6b4 and timestamp: 2021-11-13T06:49:28.021000000Z:
                     Metric:                    Value:
                   read_time                     6.969
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 6:49:28 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 37.997 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ydcedj4jtrf46

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2662

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2662/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13224][Playground]

[noreply] [BEAM-13193] Add elements field in ProcessBundleRequest and ProcessBu…

[noreply] [BEAM-3304] Refactor trigger API (#15952)

[noreply] [BEAM-3293] Add NewKeyedIterable function to SideInputAdapter interface


------------------------------------------
[...truncated 349.46 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3dea7a8c296328f236286871a096d739
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 13, 2021 12:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 13, 2021 12:46:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 13, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 13, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 13, 2021 12:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 13, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 13, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 13, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-CxDB2MFbq9vTw9dZ8Zb10JhAyMvZWoQKUaKm85AO368.jar
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6484219634203589726.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-uqgpHNMeKTW_IQrj6kBO06cpf3T8h9CT3VXlB724KHw.jar
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 13, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106181 bytes, hash 0982637497525ba9278c4230f56f605499daad667943240e288017d82b568c04> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-CYJjdJdSW6knjEIw9W9gVJnarWZ5QyQOKIAX2CtWjAQ.pb
    Nov 13, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 13, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 13, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 13, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 13, 2021 12:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-12_16_46_23-1687586334239900762?project=apache-beam-testing
    Nov 13, 2021 12:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-12_16_46_23-1687586334239900762
    Nov 13, 2021 12:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-12_16_46_23-1687586334239900762
    Nov 13, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-13T00:46:26.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:34.927Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.595Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.664Z: Expanding GroupByKey operations into optimizable parts.
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.701Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.844Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.915Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:35.949Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:36.452Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:36.566Z: Starting 5 workers in us-central1-c...
    Nov 13, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:46:53.847Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 13, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:47:20.260Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 13, 2021 12:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:47:48.258Z: Workers have started successfully.
    Nov 13, 2021 12:47:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:47:48.329Z: Workers have started successfully.
    Nov 13, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:48:15.512Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 13, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:48:15.822Z: Cleaning up.
    Nov 13, 2021 12:48:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:48:16.072Z: Stopping worker pool...
    Nov 13, 2021 12:50:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:50:35.074Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 13, 2021 12:50:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-13T00:50:35.178Z: Worker pool stopped.
    Nov 13, 2021 12:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-12_16_46_23-1687586334239900762 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d02e5cb6-642a-4f77-b2a4-9f2fe80d3dc6 and timestamp: 2021-11-13T00:50:42.622000000Z:
                     Metric:                    Value:
                   read_time                     8.762
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 13, 2021 12:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 35.66 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 22s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/m5fd3juldfwr6

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2661

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2661/display/redirect?page=changes>

Changes:

[mmack] [BEAM-13009] Bugfix: Prevent dataloss in DynamoDBIO write if

[mmack] [BEAM-13009] Bugfix: Prevent dataloss in DynamoDBIO write if

[Alexey Romanenko] [BEAM-4868] Bump com.amazonaws to 1.12.106

[noreply] [BEAM-13147] Fix nullability issues in aws (SDK v1 + v2) for AwsModule

[noreply] [BEAM-3293] Add encoding of map/multimap side inputs into proto (#15960)

[noreply] [BEAM-13052] Implement ProtoPlusCoder and add it to the default options


------------------------------------------
[...truncated 357.16 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4bdd9d4686294f5a152ac2aabe32864a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 12, 2021 6:46:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 12, 2021 6:46:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 12, 2021 6:46:21 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 12, 2021 6:46:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:46:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:46:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:46:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 12, 2021 6:46:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
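
With method DIRECT_READ, the planner collapses the calc/IO pair into a single BeamPushDownIOSourceRel, so only the four used fields and the supported filter are sent to the BigQuery Storage Read API instead of reading all 14 input fields ($0..$13 in the plan above). A sketch of the table DDL that selects this read method, assuming the Beam SQL BigQuery table provider's "method" property (the column types and exact location string are assumptions):

    CREATE EXTERNAL TABLE HACKER_NEWS (
      title VARCHAR,
      `by` VARCHAR,
      score BIGINT,
      `type` VARCHAR
    )
    TYPE bigquery
    LOCATION 'apache-beam-testing:beam.HACKER_NEWS'
    TBLPROPERTIES '{"method": "DIRECT_READ"}'

With DEFAULT, as in the failing readUsingDefaultMethod run above, the same query instead keeps filtering and projection in a BeamCalcRel on top of a plain BeamIOSourceRel.
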
    Nov 12, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 12, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 12, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 12, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9112192014326570001.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0m0LNlC99DK-MuhzKSogXYXpv7Pi6EvXmE-XVfL-bHg.jar
    Nov 12, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 12, 2021 6:46:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 12, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 3db583088a751c62cd18d64b1cb0d44003d8b9972a1c2849322297ac66d730eb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PbWDCIp1HGLNGNZLHLDUQAPYuZcqHChJMiKXrGbXMOs.pb
    Nov 12, 2021 6:46:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 12, 2021 6:46:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 12, 2021 6:46:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 12, 2021 6:46:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 12, 2021 6:46:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-12_10_46_34-10497604061900194690?project=apache-beam-testing
    Nov 12, 2021 6:46:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-12_10_46_34-10497604061900194690
    Nov 12, 2021 6:46:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-12_10_46_34-10497604061900194690
    Nov 12, 2021 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-12T18:46:39.657Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.032Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.606Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.759Z: Expanding GroupByKey operations into optimizable parts.
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.827Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.900Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.930Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:48.969Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:49.433Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:46:49.566Z: Starting 5 workers in us-central1-c...
    Nov 12, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:47:17.994Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 12, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:47:33.661Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 12, 2021 6:48:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:48:00.431Z: Workers have started successfully.
    Nov 12, 2021 6:48:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:48:00.494Z: Workers have started successfully.
    Nov 12, 2021 6:48:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:48:28.246Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 6:48:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:48:28.477Z: Cleaning up.
    Nov 12, 2021 6:48:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:48:28.599Z: Stopping worker pool...
    Nov 12, 2021 6:50:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:50:41.158Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 12, 2021 6:50:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T18:50:41.211Z: Worker pool stopped.
    Nov 12, 2021 6:50:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-12_10_46_34-10497604061900194690 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da7ad364-3527-41b5-b2a5-47dbdd99a5da and timestamp: 2021-11-12T18:50:47.464000000Z:
                     Metric:                    Value:
                   read_time                     7.532
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 6:50:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
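
The push-down test itself passed; the metrics above were dropped only because the InfluxDB publisher was not configured. A sketch of the extra test pipeline options that would satisfy the check, assuming the standard --influx* options used by Beam's test utilities (the database, measurement, and host values here are placeholders):

    --influxDatabase=beam_test_metrics \
    --influxMeasurement=sql_bqio_read_java_batch \
    --influxHost=http://localhost:8086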

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 31.098 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 28s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/xfts7orpov724

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2660

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2660/display/redirect?page=changes>

Changes:

[Alexey Romanenko] [BEAM-8727] Bump software.amazon.awssdk to 2.17.77


------------------------------------------
[...truncated 352.69 KB...]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 12, 2021 12:49:29 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 12, 2021 12:49:30 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 12, 2021 12:49:30 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 12, 2021 12:49:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:49:33 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 12, 2021 12:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 12, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 12, 2021 12:49:40 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 12, 2021 12:49:40 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3825860654607296213.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ALuBhVDl58vvMlpf_6Svtsu8adIYpPQwmDMk68sPs04.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Nov 12, 2021 12:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Nov 12, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 241 files cached, 11 files newly uploaded in 1 seconds
    Nov 12, 2021 12:49:42 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 12, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106183 bytes, hash e93e2ecc0918eb060513a7e9e8ba918604b5501826de7754422d5995da98de6d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6T4uzAkY6wYFE6fp6LqRhgS1UBgm3ndUQi1ZldqY3m0.pb
    Nov 12, 2021 12:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 12, 2021 12:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 12, 2021 12:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 12, 2021 12:49:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 12, 2021 12:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-12_04_49_44-5499180600761290759?project=apache-beam-testing
    Nov 12, 2021 12:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-12_04_49_44-5499180600761290759
    Nov 12, 2021 12:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-12_04_49_44-5499180600761290759
    Nov 12, 2021 12:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-12T12:49:48.038Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:54.596Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.309Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.360Z: Expanding GroupByKey operations into optimizable parts.
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.401Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.522Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.555Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:55.613Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:56.078Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 12:49:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:49:56.169Z: Starting 5 workers in us-central1-c...
    Nov 12, 2021 12:50:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:50:26.322Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 12, 2021 12:50:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:50:36.506Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 12, 2021 12:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:51:01.762Z: Workers have started successfully.
    Nov 12, 2021 12:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:51:01.816Z: Workers have started successfully.
    Nov 12, 2021 12:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:51:28.280Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 12:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:51:28.621Z: Cleaning up.
    Nov 12, 2021 12:51:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:51:28.867Z: Stopping worker pool...
    Nov 12, 2021 12:53:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:53:50.752Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 12, 2021 12:53:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T12:53:50.820Z: Worker pool stopped.
    Nov 12, 2021 12:53:56 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-12_04_49_44-5499180600761290759 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6afef636-bee4-422a-8fd4-91a8bb973509 and timestamp: 2021-11-12T12:53:56.643000000Z:
                     Metric:                    Value:
                   read_time                     7.744
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 12:53:56 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 31.213 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/3naldaaouzigs

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2659

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2659/display/redirect>

Changes:


------------------------------------------
[...truncated 338.45 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e619095b7a7c757597f6d6dc79f5d2c6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 12, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 12, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 12, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 12, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 12, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 12, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 12, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 12, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 12, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8149282615699980167.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-oqBWQE7TWm8nrXmFcAkc1erVk4QTSL-aV61E7DhGy2M.jar
    Nov 12, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 12, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 12, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash ca67d58d2fd8f1b61d40478aa9087d0ad3b333ed9b3eab314a45e81b507d2987> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ymfVjS_Y8bYdQEeKqQh9CtOzM-2bPqsxSkXoG1B9KYc.pb
    Nov 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 12, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 12, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-11_22_45_06-56678028344797373?project=apache-beam-testing
    Nov 12, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-11_22_45_06-56678028344797373
    Nov 12, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-11_22_45_06-56678028344797373
    Nov 12, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-12T06:45:10.226Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:17.370Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.005Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.038Z: Expanding GroupByKey operations into optimizable parts.
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.068Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.132Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.162Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.196Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.672Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:18.829Z: Starting 5 workers in us-central1-a...
    Nov 12, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:45:41.709Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 12, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:46:03.416Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Nov 12, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:46:03.455Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Nov 12, 2021 6:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:46:13.772Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 12, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:46:37.959Z: Workers have started successfully.
    Nov 12, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:46:37.999Z: Workers have started successfully.
    Nov 12, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:47:06.332Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:47:06.491Z: Cleaning up.
    Nov 12, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:47:06.575Z: Stopping worker pool...
    Nov 12, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:49:28.848Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 12, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T06:49:28.895Z: Worker pool stopped.
    Nov 12, 2021 6:49:34 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-11_22_45_06-56678028344797373 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8dc2b02c-f386-4bd7-81aa-50d7176e7727 and timestamp: 2021-11-12T06:49:34.447000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.194

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 6:49:34 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 45.854 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ofpg2z52jn5wc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2658

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2658/display/redirect?page=changes>

Changes:

[Kyle Weaver] Update Beam website to release 2.34.0.

[Kyle Weaver] Prepare docs for 2.34.0 release.

[noreply] [BEAM-13133] Loosen partitioning requirement for sample (#15818)

[noreply] [BEAM-13228] fix data race in metrics.store (#15946)

[noreply] [BEAM-3293] Add binding cases for MultiMap side inputs (#15943)

[noreply] [BEAM-13222] Re-enable Spanner integration tests (#15948)

[noreply] Merge pull request #15926 from [BEAM-13110][Playground] Playground

[noreply] [BEAM-13025] Disable deduplicating messages, as Dedupe is broken on

[noreply] [BEAM-13001] collect DoFn metrics for Combine (#15911)


------------------------------------------
[...truncated 340.82 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e619095b7a7c757597f6d6dc79f5d2c6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 12, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 12, 2021 12:45:07 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 12, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 12, 2021 12:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
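
For context, the IT drives this query through the SQL planner internals (BeamSqlRelUtils.toPCollection, per the stack trace below). A minimal sketch of issuing the same query through Beam SQL's public SqlTransform API, assuming a simplified four-column schema in place of the real 14-field HACKER_NEWS table (expr#0..13 in the plan above); the field types and sample row are illustrative, not the IT's actual setup:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlPushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Simplified four-column stand-in for the HACKER_NEWS table.
        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("someone", "story", "a title", 3L).build())
                .withRowSchema(schema));

        // Register the input under the table name the query uses and run
        // the same SELECT the IT issues ("by" is reserved, hence backticks).
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<Row>("HACKER_NEWS"), hackerNews)
                .apply(SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }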


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
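
The failure above is Beam's coder-inference error for schema-less Row outputs, and the message names the fix itself: attach a schema with PCollection.setRowSchema. A minimal sketch of that fix, with an illustrative pass-through ParDo standing in for the test's RowMonitor; the schema fields and types are assumptions for illustration, not the IT's actual schema:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema matching the four columns the query projects.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "a title", 3L).build();

        PCollection<Row> monitored = p
            .apply(Create.of(row).withRowSchema(schema))
            .apply("PassThrough", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row r, OutputReceiver<Row> out) {
                out.output(r); // stands in for the test's RowMonitor ParDo
              }
            }))
            // The missing step the error points at: a ParDo's Row output has
            // no inferable Coder, so a schema must be attached explicitly.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }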

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 12, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
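
This is what the push-down buys: only the four used fields are read, and the predicate is evaluated by the BigQuery Storage Read API instead of inside the pipeline. A rough equivalent written directly against BigQueryIO might look like the sketch below; the table reference is a placeholder, since the log does not show which physical BigQuery table backs the beam.HACKER_NEWS catalog entry:

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        // A real run also needs GCP options (project, tempLocation).
        Pipeline p = Pipeline.create();

        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                // Placeholder table reference, for illustration only.
                .from("some-project:some_dataset.hacker_news")
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Column projection, i.e. usedFields in the plan above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Row filter evaluated by the Storage Read API, i.e. the
                // pushed-down predicate this log line reports.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }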
    Nov 12, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5158953719849032348.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nTWlqp97M0BOS0vKS6vOXdaVcyS6vSYSEGHvSGTv0Hc.jar
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash 1426919caf1532ef1c2782260edcec34ca76d918e75e119cb5286b173f0cc3bd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FCaRnK8VMu8cJ4ImDtzsNMp22RjnXhGctShrFz8Mw70.pb
    Nov 12, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 12, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 12, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 12, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-11_16_45_20-12599210625770205289?project=apache-beam-testing
    Nov 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-11_16_45_20-12599210625770205289
    Nov 12, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-11_16_45_20-12599210625770205289
    Nov 12, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-12T00:45:24.485Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 12, 2021 12:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:39.464Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:39.936Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:39.983Z: Expanding GroupByKey operations into optimizable parts.
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.007Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.087Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.149Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.185Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.542Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:45:40.640Z: Starting 5 workers in us-central1-c...
    Nov 12, 2021 12:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:46:07.827Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 12, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:46:26.899Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 12, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:46:54.548Z: Workers have started successfully.
    Nov 12, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:46:54.670Z: Workers have started successfully.
    Nov 12, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:47:24.473Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 12, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:47:24.727Z: Cleaning up.
    Nov 12, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:47:24.834Z: Stopping worker pool...
    Nov 12, 2021 12:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:49:38.489Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 12, 2021 12:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-12T00:49:38.533Z: Worker pool stopped.
    Nov 12, 2021 12:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-11_16_45_20-12599210625770205289 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1cde6c33-35e4-4615-8ddd-e536eacd6046 and timestamp: 2021-11-12T00:49:53.912000000Z:
                     Metric:                    Value:
                   read_time                     7.852
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 12, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 50.256 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/myfnw63xtquvc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2657

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2657/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12550] Parallelizable kurtosis Implementation (#15909)

[noreply] [BEAM-13016] Remove avro-python3 dependency from Beam (#15900)


------------------------------------------
[...truncated 340.45 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d5b6562a33310a6b92fa17e5a7bd1cc4
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 11, 2021 6:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 11, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 11, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 11, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 11, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5604012868952250107.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_gCgNfVj5jJJOuWT2_BO5QdTjAc57Ap9VLtgolYZX-c.jar
    Nov 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 11, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106182 bytes, hash fda325b805336de8ec72216d41db769c0104fb22ac457d01db18f7c93a55d9de> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_aMluAUzbejsciFtQdt2nAEE-yKsRX0B2xj3yTpV2d4.pb
    Nov 11, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 11, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 11, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 11, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 11, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-11_10_45_31-2755900006312110004?project=apache-beam-testing
    Nov 11, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-11_10_45_31-2755900006312110004
    Nov 11, 2021 6:45:32 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-11_10_45_31-2755900006312110004
    Nov 11, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-11T18:45:34.868Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:43.378Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.289Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.492Z: Expanding GroupByKey operations into optimizable parts.
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.526Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.652Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.687Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 11, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:44.726Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:45.117Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:45:45.196Z: Starting 5 workers in us-central1-c...
    Nov 11, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:46:01.840Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 11, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:46:22.494Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 11, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:46:50.780Z: Workers have started successfully.
    Nov 11, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:46:50.818Z: Workers have started successfully.
    Nov 11, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:47:21.792Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:47:22.105Z: Cleaning up.
    Nov 11, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:47:22.293Z: Stopping worker pool...
    Nov 11, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:49:39.722Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 11, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T18:49:39.817Z: Worker pool stopped.
    Nov 11, 2021 6:49:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-11_10_45_31-2755900006312110004 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d035385f-20b7-4003-8555-b7efb2807173 and timestamp: 2021-11-11T18:49:50.270000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.385

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 6:49:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 37.255 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/yuerpdvmfnosm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2656

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2656/display/redirect?page=changes>

Changes:

[noreply] BEAM-13189 Python TextIO: add escapechar feature. (#15901)


------------------------------------------
[...truncated 344.35 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d5b6562a33310a6b92fa17e5a7bd1cc4
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 11, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 11, 2021 12:45:08 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 11, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 11, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

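The exception above names its own remedy: the output of a ParDo over Beam Rows has no inferable coder, so the resulting PCollection needs a schema attached via setRowSchema. A minimal, self-contained sketch of that fix in generic Beam Java (not the IT's actual RowMonitor code; the schema and values are illustrative):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", 3L).build())
                    .withRowSchema(schema));

        // A pass-through ParDo like the RowMonitor step above: Beam cannot
        // infer a coder for its Row output on its own.
        PCollection<Row> monitored =
            rows.apply(
                "RowMonitor",
                ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void process(@Element Row row, OutputReceiver<Row> out) {
                        out.output(row);
                      }
                    }));

        // Attaching the schema is what avoids "Unable to return a default Coder".
        monitored.setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }
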
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 11, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
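
As the BEAMPlan above shows, with DIRECT_READ the projection (usedFields) and the filter are pushed into the BigQuery Storage Read API instead of being evaluated in a BeamCalcRel. A hedged sketch of the equivalent hand-written read, reusing the columns and filter from the log (the table spec is illustrative; this is not the test's code):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Only four columns are requested and the row restriction is evaluated
        // server-side, mirroring the pushed-down plan in the log.
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("apache-beam-testing:beam.HACKER_NEWS")
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
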
    Nov 11, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 11, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 11, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 11, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6637363412112774359.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KeHyeJft7g0dzrZ2vMl4bHbd-7RhdsxxqSBxKpBwKoo.jar
    Nov 11, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 11, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 11, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash d6e784745dc29c34276e67c1d8b8a788ac35d47e44a510cad9a997d127765bf1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1ueEdF3CnDQnbmfB2LiniKw11H5EpRDK2amX0Sd2W_E.pb
    Nov 11, 2021 12:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 11, 2021 12:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 11, 2021 12:45:20 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-11_04_45_21-4803093369646146832?project=apache-beam-testing
    Nov 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-11_04_45_21-4803093369646146832
    Nov 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-11_04_45_21-4803093369646146832
    Nov 11, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-11T12:45:24.375Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 11, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:34.901Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.438Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.488Z: Expanding GroupByKey operations into optimizable parts.
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.534Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.617Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.646Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:35.681Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:36.138Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:36.216Z: Starting 5 workers in us-central1-c...
    Nov 11, 2021 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:45:46.178Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 11, 2021 12:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:46:15.942Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 11, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:46:43.025Z: Workers have started successfully.
    Nov 11, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:46:43.064Z: Workers have started successfully.
    Nov 11, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:47:14.043Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:47:14.405Z: Cleaning up.
    Nov 11, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:47:14.512Z: Stopping worker pool...
    Nov 11, 2021 12:49:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:49:34.058Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 11, 2021 12:49:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T12:49:34.115Z: Worker pool stopped.
    Nov 11, 2021 12:49:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-11_04_45_21-4803093369646146832 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 75fd4485-d4f2-43f1-bb89-9fe1e8c167aa and timestamp: 2021-11-11T12:49:39.881000000Z:
                     Metric:                    Value:
                   read_time                     9.537
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 12:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
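
This warning means the run finished and printed its metrics above, but nothing was written to InfluxDB because the publisher had no database/measurement configured. For reference, a sketch of how such settings are typically assembled with Beam's test utilities (assuming the InfluxDBSettings builder in org.apache.beam.sdk.testutils.publishing; the host, database, and measurement values here are illustrative):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsExample {
      public static void main(String[] args) {
        // Perf-test jobs normally receive these via pipeline options; without
        // a database and measurement the publisher skips publishing, as logged.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")      // assumed host
                .withDatabase("beam_test_metrics")      // assumed database
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }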

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 37.624 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/psfmk2oqp5c4g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2655

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2655/display/redirect?page=changes>

Changes:

[asottile] remove unused future / futures dependencies

[avilovpavel6] Fix environment_service_test.go

[noreply] [BEAM-12651] Exclude packages from jacoco report (#15792)

[noreply] [BEAM-13222] Skip spanner integration tests (#15942)


------------------------------------------
[...truncated 357.55 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 6 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d5b6562a33310a6b92fa17e5a7bd1cc4
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 6'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 6'
Successfully started process 'Gradle Test Executor 6'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 11, 2021 6:48:17 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 11, 2021 6:48:18 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 11, 2021 6:48:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 11, 2021 6:48:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:48:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:48:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:48:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 11, 2021 6:48:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 11, 2021 6:48:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8788901965407165181.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G6binjR-1yBS9CalWHTJcEi-_EAozbAGzpNzlqjMLDU.jar
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 11, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash a4d4d97e321b60041cd979064dda82db137a606a9b2626f73b7214851ac6671c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-pNTZfjIbYAQc2XkGTdqC2xN6YGqbJib3O3IUhRrGZxw.pb
    Nov 11, 2021 6:48:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 11, 2021 6:48:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 11, 2021 6:48:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 11, 2021 6:48:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 11, 2021 6:48:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-10_22_48_31-1464770522878810151?project=apache-beam-testing
    Nov 11, 2021 6:48:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-10_22_48_31-1464770522878810151
    Nov 11, 2021 6:48:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-10_22_48_31-1464770522878810151
    Nov 11, 2021 6:48:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-11T06:48:34.762Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 11, 2021 6:48:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:42.659Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.464Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.504Z: Expanding GroupByKey operations into optimizable parts.
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.534Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.600Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.631Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:43.700Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:44.127Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 6:48:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:44.220Z: Starting 5 workers in us-central1-c...
    Nov 11, 2021 6:48:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:48:51.761Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 11, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:49:25.083Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 11, 2021 6:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:49:52.474Z: Workers have started successfully.
    Nov 11, 2021 6:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:49:52.517Z: Workers have started successfully.
    Nov 11, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:50:22.535Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:50:22.884Z: Cleaning up.
    Nov 11, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:50:23.036Z: Stopping worker pool...
    Nov 11, 2021 6:52:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:52:49.754Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 11, 2021 6:52:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T06:52:49.826Z: Worker pool stopped.
    Nov 11, 2021 6:52:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-10_22_48_31-1464770522878810151 finished with status DONE.


Gradle Test Executor 6 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a67a7aa9-28fb-40e2-ae4f-8d2d1293a45b and timestamp: 2021-11-11T06:52:57.595000000Z:
                     Metric:                    Value:
                   read_time                      9.58
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 6:52:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 43.933 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 39s
152 actionable tasks: 111 executed, 41 from cache

Publishing build scan...
https://gradle.com/s/6tpy3hxofguhk

Stopped 5 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2654

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2654/display/redirect?page=changes>

Changes:

[sergey.kalinin] Initial add for dockerfiles and build scripts

[sergey.kalinin] Initial add for terraform scripts for infra provision

[sergey.kalinin] Add licenses

[sergey.kalinin] Add fixes to docker plugin and also build arg for beam sdk image

[sergey.kalinin] Remove wrongly added comments

[sergey.kalinin] Add beam playground applications deploy

[sergey.kalinin] Remove wrongly added log file

[sergey.kalinin] Fix syntax

[sergey.kalinin] Added default values

[sergey.kalinin] Add default value to base image

[sergey.kalinin] Added Beam Version as build arg, also changed way of getting tags for

[sergey.kalinin] Added readme file

[sergey.kalinin] Add workflow to build backend application

[sergey.kalinin] Add license

[sergey.kalinin] Add frontend workflow

[sergey.kalinin] Add license to README and added reader access to bucket

[sergey.kalinin] Remove trailing whitespace

[relax] for existing tables, no need to set a schema

[sergey.kalinin] Remove comment from dockerfile

[noreply] [BEAM-13190] Elide trailing nils (#15920)

[noreply] [BEAM-13052] Increment protobuf version to 3.19.0 (#15883)

[noreply] [BEAM-3304] refactoring trigger package in GoSDK (#15933)

[noreply] [BEAM-13001] Hook to configure sampling period for DoFn metric

[noreply] [BEAM-3293] Add MultiMap side input decomposition (#15934)


------------------------------------------
[...truncated 340.38 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6fd099702df11b3053021a45479ec265
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 11, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 11, 2021 12:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 11, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 11, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 11, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 11, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5011970287011086158.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qYQ1yLsZ9ZwjnVc5SSK7Fzo5Mm__7v_Wi7UjF30JXJE.jar
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 11, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106176 bytes, hash 7fb86f466dd44d305616d1a5c1f67d3ffb3126229754de62be72828132818a1b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-f7hvRm3UTTBWFtGlwfZ9P_sxJiKXVN5ivnKCgTKBihs.pb
    Nov 11, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 11, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 11, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 11, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 11, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-10_16_45_21-12039423120605157065?project=apache-beam-testing
    Nov 11, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-10_16_45_21-12039423120605157065
    Nov 11, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-10_16_45_21-12039423120605157065
    Nov 11, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-11T00:45:24.749Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 11, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:35.432Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.104Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.155Z: Expanding GroupByKey operations into optimizable parts.
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.190Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.284Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.320Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.348Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.770Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:36.878Z: Starting 5 workers in us-central1-a...
    Nov 11, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:45:48.020Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 11, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:46:20.629Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 11, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:46:47.797Z: Workers have started successfully.
    Nov 11, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:46:47.827Z: Workers have started successfully.
    Nov 11, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:47:15.266Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 11, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:47:15.415Z: Cleaning up.
    Nov 11, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:47:15.489Z: Stopping worker pool...
    Nov 11, 2021 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:49:44.147Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 11, 2021 12:49:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-11T00:49:44.215Z: Worker pool stopped.
    Nov 11, 2021 12:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-10_16_45_21-12039423120605157065 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 157e9faf-1a80-4939-b8d9-f8946f51dd10 and timestamp: 2021-11-11T00:49:50.772000000Z:
                     Metric:                    Value:
                   read_time                     8.048
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 11, 2021 12:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
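
The run itself succeeded; this warning only means the collected metrics were discarded instead of being written to InfluxDB, since the job supplies the BigQuery metrics options but no InfluxDB ones. A sketch of what would be added to the -DbeamTestPipelineOptions array shown earlier, assuming the standard Beam test-utils option names; the host, database, and measurement values are placeholders, not this job's configuration:

    "--influxHost=http://localhost:8086",
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch"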

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 46.683 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/pvix457undzrk

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2653

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2653/display/redirect>

Changes:


------------------------------------------
[...truncated 343.25 KB...]
Invalidating in-memory cache of /home/jenkins/.gradle/caches/6.9.1/fileHashes/fileHashes.bin
Invalidating in-memory cache of /home/jenkins/.gradle/caches/6.9.1/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of /home/jenkins/.gradle/caches/6.9.1/fileHashes/fileHashes.bin
Invalidating in-memory cache of /home/jenkins/.gradle/caches/6.9.1/fileHashes/resourceHashesCache.bin
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e5281482b1a68147ea00c9589f98ed3e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 10, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
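
This warning is triggered by the job's own invocation: the executor command line above still passes the legacy --workerHarnessContainerImage= flag. If the job definition were updated, the renamed option named in the warning would carry the same (here empty) value in the -DbeamTestPipelineOptions array:

    "--sdkContainerImage="
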
    Nov 10, 2021 6:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 10, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 10, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
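
Both read* failures in this run share one root cause: the Row PCollection emitted by the RowMonitor ParDo carries no schema, so Beam cannot infer a RowCoder for it. A minimal sketch of the fix the exception message itself suggests, assuming a PCollection<Row> named rows from the upstream read; RowMonitorDoFn and the field list are illustrative, not the test's actual code:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the projected columns (illustrative field list).
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    // Attach the schema so a RowCoder can be inferred for the ParDo output,
    // as the IllegalStateException recommends (PCollection.setRowSchema).
    PCollection<Row> monitored =
        rows.apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorDoFn())) // RowMonitorDoFn: hypothetical DoFn<Row, Row>
            .setRowSchema(schema);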

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2049683411]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 10, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 10, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 10, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 10, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 10, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4971506752966824556.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-hOrCOaQb7sc68UEw6LYyYKQ2uZLsOgupn1YP0St3fBU.jar
    Nov 10, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 1 seconds
    Nov 10, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 10, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 9aedcb2aaf3924fb4e0ad076b29c30b6511fba99319898b2aead8168a3c98288> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mu3LKq85JPtOCtB2spwwtlEfupkxmJiyrq2BaKPJgog.pb
    Nov 10, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-10_10_45_31-15685719834090920265?project=apache-beam-testing
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-10_10_45_31-15685719834090920265
    Nov 10, 2021 6:45:31 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-10_10_45_31-15685719834090920265
    Nov 10, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-10T18:45:34.490Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:42.552Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.128Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.173Z: Expanding GroupByKey operations into optimizable parts.
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.194Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.292Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.324Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 10, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.365Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 10, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.848Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:45:43.936Z: Starting 5 workers in us-central1-c...
    Nov 10, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:46:10.646Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 10, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:46:22.351Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 10, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:46:47.918Z: Workers have started successfully.
    Nov 10, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:46:47.958Z: Workers have started successfully.
    Nov 10, 2021 6:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:47:17.443Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 6:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:47:17.632Z: Cleaning up.
    Nov 10, 2021 6:47:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:47:17.739Z: Stopping worker pool...
    Nov 10, 2021 6:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:49:37.317Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 10, 2021 6:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T18:49:37.457Z: Worker pool stopped.
    Nov 10, 2021 6:49:43 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-10_10_45_31-15685719834090920265 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 85886b0a-ea0f-4413-9fe7-14ca217d002a and timestamp: 2021-11-10T18:49:43.388000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.222

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 6:49:43 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 40.332 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xus3gggzmvq7i

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2652

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2652/display/redirect>

Changes:


------------------------------------------
[...truncated 339.25 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e5281482b1a68147ea00c9589f98ed3e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 10, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 10, 2021 12:45:01 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 10, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 10, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 10, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 10, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 10, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 10, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 10, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3701133115307988211.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-bciGoLuEaUzeEeyIbIvmaQPa6KqXWIzQdJP1B1pMT2A.jar
    Nov 10, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 10, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 10, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106178 bytes, hash 11b66a43de52520318f3605311126fad695c5dadd9d571e3eb9a5fb20b8c4da5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EbZqQ95SUgMY82BTERJvrWlcXa3Z1XHj65pfsguMTaU.pb
    Nov 10, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 10, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 10, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 10, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 10, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-10_04_45_13-2524328068294251240?project=apache-beam-testing
    Nov 10, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-10_04_45_13-2524328068294251240
    Nov 10, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-10_04_45_13-2524328068294251240
    Nov 10, 2021 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-10T12:45:37.341Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 10, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:42.489Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.206Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.264Z: Expanding GroupByKey operations into optimizable parts.
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.304Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.367Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.393Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.419Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.687Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:43.760Z: Starting 5 workers in us-central1-a...
    Nov 10, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:45:59.191Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 10, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:46:31.522Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 10, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:46:58.036Z: Workers have started successfully.
    Nov 10, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:46:58.065Z: Workers have started successfully.
    Nov 10, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:47:23.983Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:47:24.110Z: Cleaning up.
    Nov 10, 2021 12:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:47:24.180Z: Stopping worker pool...
    Nov 10, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:49:41.481Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 10, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T12:49:41.554Z: Worker pool stopped.
    Nov 10, 2021 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-10_04_45_13-2524328068294251240 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6edbe75e-ef12-483f-b08a-2e6cb4f3f5c0 and timestamp: 2021-11-10T12:49:47.153000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.487

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 12:49:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 51.03 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 25s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/g45seuyk6vlwq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2651

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2651/display/redirect>

Changes:


------------------------------------------
[...truncated 342.11 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e5281482b1a68147ea00c9589f98ed3e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
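
The -DbeamTestPipelineOptions JSON in the command above is what the integration test reads back at runtime; a minimal sketch of that pattern (illustrative only, not the test's actual code):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class ReadTestOptionsSketch {
      public static void main(String[] args) {
        // TestPipeline.testingPipelineOptions() parses the JSON array passed
        // via the -DbeamTestPipelineOptions system property shown above.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner());
      }
    }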

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 10, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
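
    The deprecated flag is the "--workerHarnessContainerImage=" entry in the launch command above; keeping its (intentionally empty) value, the preferred spelling in beamTestPipelineOptions would be:

        "--sdkContainerImage="
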
    Nov 10, 2021 6:45:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 10, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
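
    The SQL logged here is what the test submits through Beam SQL; a hedged sketch of that pattern, assuming a schema-aware PCollection<Row> bound to the table name HACKER_NEWS (names illustrative):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    class SqlQuerySketch {
      // Binds rows to the table name HACKER_NEWS and runs the same query.
      static PCollection<Row> storiesAndJobs(PCollection<Row> rows) {
        return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), rows)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }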
    Nov 10, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
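
The error text above suggests the fix itself: the PCollection of Beam Rows lacks a schema, so the SDK cannot infer a RowCoder. A minimal sketch of the suggested PCollection.setRowSchema remedy, with a hypothetical schema whose field names and types are assumed from the logged query:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Attaching a schema lets the SDK pick RowCoder automatically,
      // avoiding the "Unable to return a default Coder" failure above.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();
        return rows.setRowSchema(schema);
      }
    }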

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 10, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
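
    The projection (usedFields) and filter pushed down by the planner map onto options BigQueryIO exposes directly for the Storage Read API; a hedged sketch of an equivalent hand-written read (table spec hypothetical):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // DIRECT_READ uses the BigQuery Storage Read API; selected fields and
        // the row restriction are applied server-side, as in the plan above.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // hypothetical table spec
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }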
    Nov 10, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test978095872816760055.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5plzAdtApwjKRGqu9vjFq-Z-VxEg3bHNy2S5EDF0E5k.jar
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 10, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash a8ba50948a407adfb68244bda05feb2ea5526370401f17989800f297094f38cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qLpQlIpAet-2gkS9oF_rLqVSY3BAHxeYmADylwlPOM8.pb
    Nov 10, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 10, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 10, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 10, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 10, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-09_22_45_24-14798201020113697473?project=apache-beam-testing
    Nov 10, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-09_22_45_24-14798201020113697473
    Nov 10, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-09_22_45_24-14798201020113697473
    Nov 10, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-10T06:45:27.745Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:33.426Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.030Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.068Z: Expanding GroupByKey operations into optimizable parts.
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.104Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.174Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.209Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.246Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.588Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:45:34.667Z: Starting 5 workers in us-central1-a...
    Nov 10, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:46:05.889Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 10, 2021 6:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:46:14.898Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 10, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:46:40.236Z: Workers have started successfully.
    Nov 10, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:46:40.263Z: Workers have started successfully.
    Nov 10, 2021 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:47:10.825Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 6:47:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:47:11.031Z: Cleaning up.
    Nov 10, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:47:11.111Z: Stopping worker pool...
    Nov 10, 2021 6:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:49:37.236Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 10, 2021 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T06:49:37.279Z: Worker pool stopped.
    Nov 10, 2021 6:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-09_22_45_24-14798201020113697473 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5389b35e-1294-4013-8496-98e05316bc89 and timestamp: 2021-11-10T06:49:44.086000000Z:
                     Metric:                    Value:
                   read_time                     8.574
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 6:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 37.772 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ih7fkxx7r7qny

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2650

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2650/display/redirect?page=changes>

Changes:

[Kyle Weaver] [BEAM-12976] Add Java interfaces for projection pushdown.

[Kyle Weaver] fix cast

[Kyle Weaver] [BEAM-12976] Address review comments.

[Kyle Weaver] Hard-code output tag instead of making it public.

[Kyle Weaver] revert MAIN_OUTPUT_TAG to private

[Kyle Weaver] Remove WITHOUT_FIELD_ORDERING. Field reordering will be a prerequisite

[dpcollins] Add detection of BundleFinalizers to DoFnSignatures

[dpcollins] Add enable_prime to the unified worker experiment list.

[dpcollins] Raise an exception before trying to launch a pipeline using

[dpcollins] Fix detection to properly null check.

[noreply] Merge pull request #15771 from Enable BQ Standard SQL dialect style

[noreply] [BEAM-11217] Metrics querying for Pcol metrics (#15923)


------------------------------------------
[...truncated 368.39 KB...]
Gradle Test Executor 9 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e5281482b1a68147ea00c9589f98ed3e
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 9'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 9'
Successfully started process 'Gradle Test Executor 9'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 10, 2021 12:47:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 10, 2021 12:47:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 10, 2021 12:47:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 10, 2021 12:47:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:47:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:47:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:47:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:47:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 10, 2021 12:47:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 10, 2021 12:47:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 10, 2021 12:47:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 10, 2021 12:47:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 10, 2021 12:47:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6771900879617117028.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Nev2uHTpTfHVHn-GdsZdV7wBlNAyjBn2jPXiYNJEo7w.jar
    Nov 10, 2021 12:47:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-xgCXKHPXZ7tKll7Etms0F7lqQhG2aj4mwApiqGREv6g.jar
    Nov 10, 2021 12:47:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-FOFIGu4dd7K29gSayvzjWLojwhwukgoZAnM1F1uFR1U.jar
    Nov 10, 2021 12:48:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 3 files newly uploaded in 0 seconds
    Nov 10, 2021 12:48:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 10, 2021 12:48:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106180 bytes, hash e03dc24fda23f29698c6f29c36a5e86971018e9eb79d58365088abdd20ef0bcb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4D3CT9oj8paYxvKcNqXoaXEBjp63nVg2UIir3SDvC8s.pb
    Nov 10, 2021 12:48:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 10, 2021 12:48:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 10, 2021 12:48:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 10, 2021 12:48:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 10, 2021 12:48:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-09_16_48_02-8334529904552290552?project=apache-beam-testing
    Nov 10, 2021 12:48:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-09_16_48_02-8334529904552290552
    Nov 10, 2021 12:48:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-09_16_48_02-8334529904552290552
    Nov 10, 2021 12:48:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-10T00:48:06.098Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:13.423Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.390Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.430Z: Expanding GroupByKey operations into optimizable parts.
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.459Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.511Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.549Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:14.584Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:15.024Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:15.127Z: Starting 5 workers in us-central1-c...
    Nov 10, 2021 12:48:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:48:36.379Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 10, 2021 12:49:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:00.970Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:26.689Z: Workers have started successfully.
    Nov 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:26.724Z: Workers have started successfully.
    Nov 10, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:51.838Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 10, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:51.998Z: Cleaning up.
    Nov 10, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:49:52.111Z: Stopping worker pool...
    Nov 10, 2021 12:52:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:52:26.739Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 10, 2021 12:52:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-10T00:52:26.778Z: Worker pool stopped.
    Nov 10, 2021 12:52:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-09_16_48_02-8334529904552290552 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d005b96e-dd8a-434c-962b-f338a77c9987 and timestamp: 2021-11-10T00:52:33.236000000Z:
                     Metric:                    Value:
                   read_time                     6.971
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 10, 2021 12:52:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 9 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 46.743 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 13s
152 actionable tasks: 116 executed, 36 from cache

Publishing build scan...
https://gradle.com/s/v6cddbmm5zpcy

Stopped 8 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2649

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2649/display/redirect?page=changes>

Changes:

[msbukal] Minor HL7v2IO Fixes

[Andrew Pilloud] [BEAM-13056] Expose FieldAccess in DoFnSchemaInformation

[noreply] [BEAM-11097] Create hook to enable cross-bundle side input caching

[noreply] Merge pull request #15804 from [BEAM-13109][Playground] Add processing

[noreply] Improve FhirIO LRO Counters + minor fixes (#15921)


------------------------------------------
[...truncated 361.94 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 8 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3ddef3fb6d4c7b161005a9514fc6dbf8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 8'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 8'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 09, 2021 6:54:41 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 09, 2021 6:54:41 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 09, 2021 6:54:42 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 09, 2021 6:54:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:54:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 09, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
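
[Editor's note: the BeamPushDownIOSourceRel above shows the planner handing both
the projection (usedFields) and the filter to BigQuery. Below is a rough sketch
of the same read expressed directly against BigQueryIO, assuming a hypothetical
table spec; withSelectedFields and withRowRestriction are the BigQuery Storage
Read API hooks that this kind of push-down maps onto:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        pipeline.apply(
            "Read with push-down",
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // hypothetical
                .withMethod(Method.DIRECT_READ)
                // Column projection pushed to the Storage Read API.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate pushed server-side, mirroring the filter logged above.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
]
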
    Nov 09, 2021 6:54:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 09, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 09, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 09, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-JM8_o0pMdLhqj4cXrz4KbyDUdWQ6c_v1IXahYbXjfb8.jar
    Nov 09, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9010914290423903298.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NyqwkFV8cJK9h5mv3v46sjpWM54y2ZK1llsE3csA4lM.jar
    Nov 09, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 0 seconds
    Nov 09, 2021 6:54:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 09, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <106183 bytes, hash f04ff3a05e71dfb12fe316817de8a21f9524e60af2f2d1243d4038ec895023ba> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8E_zoF5x37Ev4xaBfeiiH5Uk5gry8tEkPUA47IlQI7o.pb
    Nov 09, 2021 6:54:54 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 09, 2021 6:54:54 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 09, 2021 6:54:54 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 09, 2021 6:54:54 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 09, 2021 6:54:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-09_10_54_54-11118469639837937938?project=apache-beam-testing
    Nov 09, 2021 6:54:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-09_10_54_54-11118469639837937938
    Nov 09, 2021 6:54:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-09_10_54_54-11118469639837937938
    Nov 09, 2021 6:55:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-09T18:54:58.043Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:05.162Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.033Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.087Z: Expanding GroupByKey operations into optimizable parts.
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.134Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.219Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.257Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.299Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:06.745Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:07.003Z: Starting 5 workers in us-central1-c...
    Nov 09, 2021 6:55:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:17.863Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 09, 2021 6:55:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:55:47.235Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 09, 2021 6:56:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:56:12.558Z: Workers have started successfully.
    Nov 09, 2021 6:56:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:56:12.596Z: Workers have started successfully.
    Nov 09, 2021 6:56:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:56:42.604Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 6:56:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:56:42.784Z: Cleaning up.
    Nov 09, 2021 6:56:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:56:42.862Z: Stopping worker pool...
    Nov 09, 2021 6:59:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:59:03.070Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 09, 2021 6:59:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T18:59:03.126Z: Worker pool stopped.
    Nov 09, 2021 6:59:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-09_10_54_54-11118469639837937938 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b2bae739-bb27-4cc3-af29-a04ca3b1d3f4 and timestamp: 2021-11-09T18:59:09.538000000Z:
                     Metric:                    Value:
                   read_time                     8.836
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 6:59:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
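
[Editor's note: this warning means the InfluxDB publisher was left unconfigured,
so that export is skipped (the BigQuery export configured via
--metricsBigQueryTable is separate). In Beam's test infrastructure these
settings are normally passed as extra pipeline options; the option names below
match those used elsewhere in the test utilities, while the values are
placeholders:

    --influxMeasurement=sql_bqio_read_java_batch
    --influxDatabase=beam_test_metrics
    --influxHost=http://localhost:8086
]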

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 31.941 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 23s
152 actionable tasks: 112 executed, 40 from cache

Publishing build scan...
https://gradle.com/s/bmuj73xanxqyg

Stopped 7 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2648

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2648/display/redirect>

Changes:


------------------------------------------
[...truncated 340.15 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 450f749dec2d99bcd0853d8a4516c7a0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 09, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
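
[Editor's note: the deprecated flag comes from the beamTestPipelineOptions shown
in the command line above, where --workerHarnessContainerImage= is passed with
an empty value. The warning asks for the newer spelling of the same
DataflowPipelineOptions flag, e.g.:

    --workerHarnessContainerImage=""    (deprecated)
    --sdkContainerImage=""              (preferred replacement)
]
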
    Nov 09, 2021 12:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 09, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 09, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 09, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4741640783955234572.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-99SonIVf7LKrWPZa50LbF0oUXdgBhboxfjT8UIGRid8.jar
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 09, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105622 bytes, hash 4a53ddd4188b5074202884cf7e8cefbdd3e556bc1d81e3c60a91762a26a63600> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SlPd1BiLUHQgKITPfozvvdPlVrwdgePGCpF2KiamNgA.pb
    Nov 09, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 09, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 09, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 09, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 09, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-09_04_45_07-15064057953547780647?project=apache-beam-testing
    Nov 09, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-09_04_45_07-15064057953547780647
    Nov 09, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-09_04_45_07-15064057953547780647
    Nov 09, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-09T12:45:10.790Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:20.517Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.201Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.260Z: Expanding GroupByKey operations into optimizable parts.
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.302Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.403Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.438Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 09, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.480Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 09, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:21.954Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:22.062Z: Starting 5 workers in us-central1-c...
    Nov 09, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:45:53.595Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 09, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:46:06.859Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 09, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:46:33.259Z: Workers have started successfully.
    Nov 09, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:46:33.371Z: Workers have started successfully.
    Nov 09, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:47:02.574Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:47:03.113Z: Cleaning up.
    Nov 09, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:47:03.514Z: Stopping worker pool...
    Nov 09, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:49:22.178Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 09, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T12:49:22.291Z: Worker pool stopped.
    Nov 09, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-09_04_45_07-15064057953547780647 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e27ce9d3-cff1-429f-8505-defb71524b47 and timestamp: 2021-11-09T12:49:28.715000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.877

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 39.764 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ytumvp3b3p5ws

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2647

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2647/display/redirect>

Changes:


------------------------------------------
[...truncated 338.97 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 450f749dec2d99bcd0853d8a4516c7a0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 09, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 09, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 09, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1346947850]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@499588094]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 09, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 09, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6442264174924611019.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-VuZJcDQDVuiD6Fe5IbwXM5URK0NocqQuDLzW9IkEcRg.jar
    Nov 09, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 09, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 09, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105627 bytes, hash b9334ba282817c574f8f39dc72336400164ee635077669009f254d935a297d69> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uTNLooKBfFdPjznccjNkABZO5jUHdmkAnyVNk1opfWk.pb
    Nov 09, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 09, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 09, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 09, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 09, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-08_22_45_08-18224347272897797043?project=apache-beam-testing
    Nov 09, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-08_22_45_08-18224347272897797043
    Nov 09, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-08_22_45_08-18224347272897797043
    Nov 09, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-09T06:45:11.801Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:21.711Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.439Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.491Z: Expanding GroupByKey operations into optimizable parts.
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.530Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.619Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.646Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 09, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:22.681Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 09, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:23.091Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:23.215Z: Starting 5 workers in us-central1-c...
    Nov 09, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:45:28.681Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 09, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:02.982Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 09, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:28.695Z: Workers have started successfully.
    Nov 09, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:28.737Z: Workers have started successfully.
    Nov 09, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:58.807Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:58.981Z: Cleaning up.
    Nov 09, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:46:59.079Z: Stopping worker pool...
    Nov 09, 2021 6:49:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:49:19.413Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 09, 2021 6:49:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T06:49:19.505Z: Worker pool stopped.
    Nov 09, 2021 6:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-08_22_45_08-18224347272897797043 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 99e9b52d-b7ae-415f-a244-e07f843555c9 and timestamp: 2021-11-09T06:49:25.227000000Z:
                     Metric:                    Value:
                   read_time                     6.863
                 fields_read                 4375276.0
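
For readers tracing where these numbers come from: read_time and fields_read are collected through Beam's Metrics API and queried from the PipelineResult once the job reaches a terminal state. A minimal sketch of that query pattern, using only the public Metrics API (the IT's metric namespaces and test-util helpers are not visible in this log, so none of the names below are taken from them):

    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.MetricQueryResults;
    import org.apache.beam.sdk.metrics.MetricResult;
    import org.apache.beam.sdk.metrics.MetricsFilter;

    class MetricsPeek {
      // Query every attempted counter from a finished job and print it.
      // An unfiltered MetricsFilter is used because the IT's namespace and
      // step names cannot be confirmed from this log alone.
      static void printCounters(PipelineResult result) {
        MetricQueryResults metrics =
            result.metrics().queryMetrics(MetricsFilter.builder().build());
        for (MetricResult<Long> counter : metrics.getCounters()) {
          System.out.println(counter.getName() + " = " + counter.getAttempted());
        }
      }
    }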

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 6:49:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 35.011 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ojmch74xyhmj6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2646

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2646/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13155][Playground]

[noreply] Merge pull request #15873 from [BEAM-13181] Remove Sharding from

[noreply] Merge pull request #15813 from [BEAM-13071] [Playground] Update run code

[Kyle Weaver] [BEAM-5172] Temporary ignore testSplit and testSizes tests waiting for a


------------------------------------------
[...truncated 340.59 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 450f749dec2d99bcd0853d8a4516c7a0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 09, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 09, 2021 12:45:10 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 09, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 09, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
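
The stack traces further down show this statement reaching the planner through BeamSqlRelUtils.toPCollection. The equivalent user-facing way to run the same query is SqlTransform; a sketch assuming a schema-aware PCollection<Row> input, with PCOLLECTION standing in for the HACKER_NEWS table (these names are illustrative, not the IT's own):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class SqlSketch {
      // Apply the same projection and filter with the public SQL API.
      static PCollection<Row> filterStoriesAndJobs(PCollection<Row> rows) {
        return rows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }
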
    Nov 09, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
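
This test, and readUsingDefaultMethod just below, fail at pipeline construction for the same reason: the RowMonitor ParDo emits Beam Row elements without an attached schema, so no coder can be inferred for its output. The remedy the message itself points at is PCollection.setRowSchema; a minimal sketch, with field names and types guessed from the SELECT list above rather than taken from the actual HACKER_NEWS table definition:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attach a schema so the SDK can infer a RowCoder for the output.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }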

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 09, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 09, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
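
The BeamPushDownIOSourceRel above shows both project push-down (usedFields) and predicate push-down succeeding, which is why this test alone gets as far as submitting a Dataflow job. At the BigQueryIO level, the pushed-down read is roughly equivalent to the following sketch (the table reference is a placeholder, not the table the IT reads):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownSketch {
      // Read only the used fields and push the filter to the Storage API.
      static TypedRead<TableRow> pushedDownRead() {
        return BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // placeholder
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2");
      }
    }
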
    Nov 09, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test912858338024367067.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-bT18y5iFpVZahhxqKZ59gI4bdva3iG7Lq1W_ReRNRKc.jar
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 09, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash 214000144d7a965ece402bf8b2a6785091464acf506fa8b325f95837f8c1ad0b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IUAAFE16ll7OQCv4sqZ4UJFGSs9Qb6izJflYN_jBrQs.pb
    Nov 09, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-08_16_45_22-16058771963908001373?project=apache-beam-testing
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-08_16_45_22-16058771963908001373
    Nov 09, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-08_16_45_22-16058771963908001373
    Nov 09, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-09T00:45:25.554Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 09, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:35.520Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.147Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.182Z: Expanding GroupByKey operations into optimizable parts.
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.224Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.306Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.353Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.400Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.722Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:36.790Z: Starting 5 workers in us-central1-a...
    Nov 09, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:45:45.385Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 09, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:46:21.726Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 09, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:46:48.133Z: Workers have started successfully.
    Nov 09, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:46:48.168Z: Workers have started successfully.
    Nov 09, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:47:18.471Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 09, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:47:18.598Z: Cleaning up.
    Nov 09, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:47:18.692Z: Stopping worker pool...
    Nov 09, 2021 12:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:49:38.614Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 09, 2021 12:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-09T00:49:38.657Z: Worker pool stopped.
    Nov 09, 2021 12:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-08_16_45_22-16058771963908001373 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ccd09791-faf0-4ee9-9aab-599b64a3173b and timestamp: 2021-11-09T00:49:44.777000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.394

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 09, 2021 12:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.833 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/rcz7xo6ehofo4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2645

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2645/display/redirect?page=changes>

Changes:

[25622840+adude3141] [BEAM-13157] add regression test for hadoop configuration on


------------------------------------------
[...truncated 338.27 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 08, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 08, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 08, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 08, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 08, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 08, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 08, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6915112470834048980.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0pNb8-a4nZ_lBG6GuQ73Dwu-pJAm0Lj5F_aizVeXbXU.jar
    Nov 08, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 08, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 08, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105624 bytes, hash 8f254fcd40dcc546f3a932c83d1dc3b37f62c2577c824e11136a2e0ceb4aae59> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jyVPzUDcxUbzqTLIPR3Ds39iwld8gk4RE2ouDOtKrlk.pb
    Nov 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 08, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-08_10_45_03-17838473969301661030?project=apache-beam-testing
    Nov 08, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-08_10_45_03-17838473969301661030
    Nov 08, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-08_10_45_03-17838473969301661030
    Nov 08, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-08T18:45:08.072Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.078Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.706Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.747Z: Expanding GroupByKey operations into optimizable parts.
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.774Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.834Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.857Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:15.901Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:16.244Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:16.320Z: Starting 5 workers in us-central1-c...
    Nov 08, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:45:28.524Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 08, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:00.265Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 08, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:25.016Z: Workers have started successfully.
    Nov 08, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:25.042Z: Workers have started successfully.
    Nov 08, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:54.173Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:54.354Z: Cleaning up.
    Nov 08, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:46:54.453Z: Stopping worker pool...
    Nov 08, 2021 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:49:31.336Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 08, 2021 6:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T18:49:31.388Z: Worker pool stopped.
    Nov 08, 2021 6:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-08_10_45_03-17838473969301661030 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8c44e11b-c10e-4007-ab85-92725624d6bd and timestamp: 2021-11-08T18:49:36.872000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.017

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 6:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 49.463 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/64vhativmerta

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2644

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2644/display/redirect>

Changes:


------------------------------------------
[...truncated 340.70 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 08, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 08, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 08, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
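
For orientation, this is roughly how a query like the one above reaches the planner through Beam SQL's public API; a minimal sketch assuming an input PCollection<Row> named hackerNews (the IT itself drives the planner through BeamSqlRelUtils instead):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // With a single input, Beam SQL exposes it under the table name PCOLLECTION.
    PCollection<Row> filtered =
        hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));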


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
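
The exception text above already names the fix: a ParDo that re-emits Beam Rows has no inferable coder, so its output PCollection needs an explicit schema. A minimal sketch of that pattern, assuming a PCollection<Row> named rows and an illustrative two-field schema (none of these names are the IT's actual RowMonitor code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema =
        Schema.builder().addStringField("author").addInt64Field("score").build();

    // Pass-through ParDo, analogous to a monitoring transform: its output
    // elements are Rows, for which the CoderRegistry cannot infer a coder.
    PCollection<Row> monitored =
        rows.apply(
            ParDo.of(
                new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row);
                  }
                }));

    // Attaching the schema installs a RowCoder, so finishSpecifying() no
    // longer throws the IllegalStateException seen in this stack trace.
    monitored.setRowSchema(schema);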

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
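
For reference, the same projection and predicate push-down can be written directly against BigQueryIO's Storage API read; a sketch in which the field list and filter string come from the plan above, while pipeline and the table reference are assumptions:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> stories =
        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // assumed reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only the usedFields are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the supported filter runs server-side.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
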
    Nov 08, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 08, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 08, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 08, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7649285631177205275.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2KLQez6_4NVbJ3z6QWPFA93MKgwBCUVwZ3sYKfdH11c.jar
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 4 files newly uploaded in 0 seconds
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 08, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash e42bb81d6b1b100a509749dd45e2f4253972cd0421a6d642d19463a5cccd2721> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5Cu4HWsbEApQl0ndReL0JTlyzQQhptZC0ZRjpczNJyE.pb
    Nov 08, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 08, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 08, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 08, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 08, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-08_04_45_08-11586057655158625769?project=apache-beam-testing
    Nov 08, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-08_04_45_08-11586057655158625769
    Nov 08, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-08_04_45_08-11586057655158625769
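
The same CLI can also report on the job while it runs; for example (an assumed invocation, not taken from this log):

    gcloud dataflow jobs describe 2021-11-08_04_45_08-11586057655158625769 --project=apache-beam-testing --region=us-central1
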
    Nov 08, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-08T12:45:11.675Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:24.890Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 08, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.619Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.661Z: Expanding GroupByKey operations into optimizable parts.
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.698Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.784Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.814Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:25.863Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:26.256Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:26.346Z: Starting 5 workers in us-central1-c...
    Nov 08, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:45:55.272Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 08, 2021 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:46:11.913Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 08, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:46:38.205Z: Workers have started successfully.
    Nov 08, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:46:38.254Z: Workers have started successfully.
    Nov 08, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:47:09.116Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:47:09.364Z: Cleaning up.
    Nov 08, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:47:09.466Z: Stopping worker pool...
    Nov 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:49:23.043Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T12:49:23.109Z: Worker pool stopped.
    Nov 08, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-08_04_45_08-11586057655158625769 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d6635cc4-a400-447e-a975-3ba44c6ca6f0 and timestamp: 2021-11-08T12:49:28.822000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.377

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
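
A note on that warning: the InfluxDB publisher only runs when the test is handed a database and measurement, which Beam's test utilities conventionally take as pipeline options along these lines (option names and values here are assumptions based on the load-test conventions, not shown in this log):

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086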

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 39.38 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings
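
To surface the individual deprecations when reproducing this locally, the build's own suggestion can be applied to the failing task, e.g.:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all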

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4egig7hao5dmi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2643

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2643/display/redirect>

Changes:


------------------------------------------
[...truncated 338.16 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 08, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 08, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 08, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 08, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 08, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 08, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 08, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 08, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8125818951837391946.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xjOJExd3MqzYUAqTDLQsyei9iOA_k2ivQyt_7cXKztM.jar
    Nov 08, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 08, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 08, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105623 bytes, hash 5002e28094eaa570fae990189de1d7f7277316803bc0537554b340a496c15393> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UALigJTqpXD66ZAYneHX9ydzFoA7wFN1VLNApJbBU5M.pb
    Nov 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 08, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-07_22_45_03-17957496668043169356?project=apache-beam-testing
    Nov 08, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-07_22_45_03-17957496668043169356
    Nov 08, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-07_22_45_03-17957496668043169356
    Nov 08, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-08T06:45:06.833Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:13.672Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.303Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.335Z: Expanding GroupByKey operations into optimizable parts.
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.366Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.435Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.476Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 08, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.508Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 08, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.801Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:14.980Z: Starting 5 workers in us-central1-a...
    Nov 08, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:45:44.840Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 08, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:00.443Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 08, 2021 6:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:24.905Z: Workers have started successfully.
    Nov 08, 2021 6:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:24.933Z: Workers have started successfully.
    Nov 08, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:51.819Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:52.006Z: Cleaning up.
    Nov 08, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:46:52.086Z: Stopping worker pool...
    Nov 08, 2021 6:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:49:19.390Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 08, 2021 6:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T06:49:19.424Z: Worker pool stopped.
    Nov 08, 2021 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-07_22_45_03-17957496668043169356 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cd098bc1-9bf3-4c35-b793-0c37df88e69b and timestamp: 2021-11-08T06:49:26.068000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.428

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 6:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 38.744 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4qqfsrpkv2cko

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2642

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2642/display/redirect>

Changes:


------------------------------------------
[...truncated 338.77 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 08, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 08, 2021 12:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 08, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 08, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 08, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 08, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 08, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 08, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 08, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 08, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8524176938575215154.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SW0Epl8SLzNPC0m4qCyTk1xmleDOOQbWLuBCp7r7hik.jar
    Nov 08, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 08, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 08, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105626 bytes, hash 0a47fe48a7a027b6f26986cbb24433b9ad3ccfd5c21230b3f273e04a69f485e0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ckf-SKegJ7byaYbLskQzua08z9XCEjCz8nPgSmn0heA.pb
    Nov 08, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-07_16_45_10-3557661608509117401?project=apache-beam-testing
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-07_16_45_10-3557661608509117401
    Nov 08, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-07_16_45_10-3557661608509117401
    Nov 08, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-08T00:45:13.422Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.153Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.722Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.748Z: Expanding GroupByKey operations into optimizable parts.
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.777Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.857Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.884Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:30.907Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:31.201Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:31.266Z: Starting 5 workers in us-central1-a...
    Nov 08, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:45:49.069Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 08, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:46:11.589Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 08, 2021 12:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:46:38.634Z: Workers have started successfully.
    Nov 08, 2021 12:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:46:38.662Z: Workers have started successfully.
    Nov 08, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:47:05.511Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 08, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:47:05.638Z: Cleaning up.
    Nov 08, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:47:05.706Z: Stopping worker pool...
    Nov 08, 2021 12:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:49:36.104Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 08, 2021 12:49:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-08T00:49:36.154Z: Worker pool stopped.
    Nov 08, 2021 12:50:05 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-07_16_45_10-3557661608509117401 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bad01ac3-a0fb-4320-b35d-e9e07ad8d8d3 and timestamp: 2021-11-08T00:50:06.021000000Z:
                     Metric:                    Value:
                   read_time                      8.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 08, 2021 12:50:06 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
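
    The publisher skips InfluxDB because no measurement/database settings were
    supplied with the test options. In Beam's test utilities these normally arrive
    as extra pipeline options, along the lines of the following (option names
    assumed from the InfluxDBSettings helpers; the host and database values are
    placeholders):

        --influxMeasurement=sql_bqio_read_java_batch
        --influxDatabase=beam_test_metrics
        --influxHost=http://localhost:8086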

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 13.236 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 41s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ructorqowlnsy

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Mon Nov 01 00:44:28 UTC 2021.
Cache entries evicted. In-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin: Size{4276} MaxSize{4500}, CacheStats{hitCount=47, missCount=3423, loadSuccessCount=3424, loadExceptionCount=0, totalLoadTime=145703078, evictionCount=1} 
Performance may suffer from in-memory cache misses. Increase max heap size of Gradle build process to reduce cache misses.
Cache entries evicted. In-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin: Size{4500} MaxSize{4500}, CacheStats{hitCount=55, missCount=4097, loadSuccessCount=4098, loadExceptionCount=0, totalLoadTime=173660226, evictionCount=451} 
Performance may suffer from in-memory cache misses. Increase max heap size of Gradle build process to reduce cache misses.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.243 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2641

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2641/display/redirect>

Changes:


------------------------------------------
[...truncated 339.58 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 07, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 07, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 07, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 07, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1017094781]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1696690835]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 07, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 07, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 07, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 07, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 07, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4999505376570295319.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-C4SsWYy90onsdo6NJcRojvOLTHmhOIRw6FX3RorbltM.jar
    Nov 07, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 07, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 07, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105626 bytes, hash d00bc30a78c6827e3c864404e80e754a0707d6d5dfd639d0be31b21667fbd155> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-0AvDCnjGgn48hkQE6A51SgcH1tXf1jnQvjGyFmf70VU.pb
    Nov 07, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 07, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 07, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 07, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 07, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-07_10_45_03-12353851815036800672?project=apache-beam-testing
    Nov 07, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-07_10_45_03-12353851815036800672
    Nov 07, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-07_10_45_03-12353851815036800672
    Nov 07, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-07T18:45:09.329Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 07, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:14.922Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.532Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.568Z: Expanding GroupByKey operations into optimizable parts.
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.597Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.680Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.712Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:15.741Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:16.039Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:16.108Z: Starting 5 workers in us-central1-a...
    Nov 07, 2021 6:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:37.680Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 07, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:45:59.016Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 07, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:46:23.770Z: Workers have started successfully.
    Nov 07, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:46:23.797Z: Workers have started successfully.
    Nov 07, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:46:48.616Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:46:48.752Z: Cleaning up.
    Nov 07, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:46:48.841Z: Stopping worker pool...
    Nov 07, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:49:08.659Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 07, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T18:49:08.700Z: Worker pool stopped.
    Nov 07, 2021 6:49:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-07_10_45_03-12353851815036800672 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b3439045-7979-4714-ab11-315fa4bb559f and timestamp: 2021-11-07T18:49:15.456000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.006

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 6:49:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 28.461 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 57s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/z6rgtfgyy47ja

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2640

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2640/display/redirect>

Changes:


------------------------------------------
[...truncated 338.96 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 07, 2021 1:03:27 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 07, 2021 1:03:28 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 07, 2021 1:03:28 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 07, 2021 1:03:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 1:03:31 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:32 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 1:03:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 1:03:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 1:03:33 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 1:03:34 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 1:03:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 07, 2021 1:03:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 07, 2021 1:03:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 07, 2021 1:03:39 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 07, 2021 1:03:39 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 07, 2021 1:03:39 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4770515055090917533.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XeuTRIoqrets4x7UVxXMKlT55nwSYiH86WpkGROmGsY.jar
    Nov 07, 2021 1:03:40 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 07, 2021 1:03:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 07, 2021 1:03:40 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105626 bytes, hash 357525418d8023419b4f43831d234ca39d97c79fe53c8fe815e1c458c09b7899> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-NXUlQY2AI0GbT0ODHSNMo52Xx5_lPI_oFeHEWMCbeJk.pb
    Nov 07, 2021 1:03:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-07_05_03_43-7817876710000170894?project=apache-beam-testing
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-07_05_03_43-7817876710000170894
    Nov 07, 2021 1:03:43 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-07_05_03_43-7817876710000170894
    Nov 07, 2021 1:03:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-07T13:03:46.360Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 07, 2021 1:03:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:52.329Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.005Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.044Z: Expanding GroupByKey operations into optimizable parts.
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.071Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.147Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.175Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.208Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.554Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 1:03:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:03:53.628Z: Starting 5 workers in us-central1-c...
    Nov 07, 2021 1:04:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:04:05.939Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 07, 2021 1:04:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:04:36.728Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 07, 2021 1:05:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:05:02.010Z: Workers have started successfully.
    Nov 07, 2021 1:05:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:05:02.040Z: Workers have started successfully.
    Nov 07, 2021 1:05:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:05:29.815Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 1:05:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:05:29.974Z: Cleaning up.
    Nov 07, 2021 1:05:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:05:30.054Z: Stopping worker pool...
    Nov 07, 2021 1:07:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:07:53.625Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 07, 2021 1:07:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T13:07:53.661Z: Worker pool stopped.
    Nov 07, 2021 1:07:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-07_05_03_43-7817876710000170894 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4afb3c4f-2fe1-4ca5-924d-c17b2dd450d1 and timestamp: 2021-11-07T13:07:59.367000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.689

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 1:07:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.041 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.272 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vhhwy3knrbs74

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2639

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2639/display/redirect>

Changes:


------------------------------------------
[...truncated 338.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 07, 2021 6:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 07, 2021 6:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 07, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 07, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1017094781]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
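
This failure happens at pipeline construction, before anything is submitted to Dataflow: the PCollection of Beam Rows emitted by ParDo(RowMonitor) has no schema, so no Coder can be inferred. As the exception text suggests, a Row-typed PCollection needs PCollection.setRowSchema (or an explicit setCoder). The following is a minimal, self-contained sketch of that pattern, assuming a schema matching the four projected columns; it is illustrative only, not the integration test's own code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Schema for the columns projected by the test query.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            pipeline
                .apply(Create.of(
                    Row.withSchema(schema)
                        .addValues("alice", "story", "Hello", 3L)
                        .build()))
                // Without this call, Coder inference fails with exactly the
                // IllegalStateException shown above.
                .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }

The same exception repeats verbatim for readUsingDefaultMethod below; of the three tests, only the push-down variant builds successfully and reaches Dataflow.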

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1696690835]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 07, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
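
Here the planner has pushed both the projection (usedFields) and the filter into the BigQuery source, so only four columns of pre-filtered rows cross the Storage API. Expressed directly against BigQueryIO, the same push-down corresponds to selected fields plus a row restriction; a minimal sketch follows, with the table name as a placeholder rather than the dataset this job actually reads.

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Storage-API read mirroring the pushed-down plan: project four
        // fields and filter rows server-side before they reach the pipeline.
        PCollection<TableRow> rows =
            pipeline.apply(
                BigQueryIO.readTableRows()
                    .from("some-project:some_dataset.HACKER_NEWS") // placeholder
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(
                        Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }

The row-restriction string matches the filter the planner reports pushing down in the log line above.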
    Nov 07, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 07, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 07, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 07, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3101771753485352471.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-X2vlm561Bsr13H2f9iZpikgzdMT1XHIbw3mQ3-W0Srk.jar
    Nov 07, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 07, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 07, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105626 bytes, hash 1883266f148612d57aecf582a570fcee6f5a60d3aadf61a21dec4dfb79b2a452> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GIMmbxSGEtV67PWCpXD87m9aYNOq32GiHexN-3mypFI.pb
    Nov 07, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 07, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 07, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 07, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 07, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-06_23_45_02-371593840203376660?project=apache-beam-testing
    Nov 07, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-06_23_45_02-371593840203376660
    Nov 07, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-06_23_45_02-371593840203376660
    Nov 07, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-07T06:45:06.077Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 07, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:14.715Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.353Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.382Z: Expanding GroupByKey operations into optimizable parts.
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.407Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.479Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.505Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.537Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.834Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:15.911Z: Starting 5 workers in us-central1-c...
    Nov 07, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:45:27.777Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 07, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:00.731Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 07, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:26.371Z: Workers have started successfully.
    Nov 07, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:26.422Z: Workers have started successfully.
    Nov 07, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:54.680Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:54.819Z: Cleaning up.
    Nov 07, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:46:54.907Z: Stopping worker pool...
    Nov 07, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:49:13.326Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 07, 2021 6:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T06:49:13.364Z: Worker pool stopped.
    Nov 07, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-06_23_45_02-371593840203376660 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a2073580-cc6f-4cb3-baea-db1f328b32a8 and timestamp: 2021-11-07T06:49:18.837000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.138

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 6:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 32.618 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rfv4tv4ntoohq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2638

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2638/display/redirect>

Changes:


------------------------------------------
[...truncated 338.38 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 07, 2021 12:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 07, 2021 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 07, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 07, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 07, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 07, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 07, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 07, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 07, 2021 12:44:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 07, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 07, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4087408473808050165.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-v_A6YT-GuhLKfYW6Q3qCM74IS6qNWSl5rbfaKgK-wJU.jar
    Nov 07, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 07, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 07, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash 672f0ef2d0c09b5a107342f12022b4bdd9b1104f60d6b360cd6d5d900e8c36e3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Zy8O8tDAm1oQc0LxICK0vdmxEE9g1rNgzW1dkA6MNuM.pb
    Nov 07, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 07, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 07, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 07, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 07, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-06_17_45_02-11167703340113691513?project=apache-beam-testing
    Nov 07, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-06_17_45_02-11167703340113691513
    Nov 07, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-06_17_45_02-11167703340113691513
    Nov 07, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-07T00:45:05.174Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.088Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.741Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.778Z: Expanding GroupByKey operations into optimizable parts.
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.805Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.865Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.892Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:11.924Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:12.237Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:12.312Z: Starting 5 workers in us-central1-a...
    Nov 07, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:35.196Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 07, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:57.865Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Nov 07, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:45:57.896Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Nov 07, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:08.225Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 07, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:24.218Z: Workers have started successfully.
    Nov 07, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:24.239Z: Workers have started successfully.
    Nov 07, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:52.758Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 07, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:52.920Z: Cleaning up.
    Nov 07, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:46:53.011Z: Stopping worker pool...
    Nov 07, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:49:14.784Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 07, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-07T00:49:14.825Z: Worker pool stopped.
    Nov 07, 2021 12:49:24 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-06_17_45_02-11167703340113691513 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 87661e17-3e7d-4170-8400-623e181b5e55 and timestamp: 2021-11-07T00:49:24.971000000Z:
                     Metric:                    Value:
                   read_time                     8.235
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 07, 2021 12:49:25 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 38.952 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yosjaqzuiupzy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2637

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2637/display/redirect?page=changes>

Changes:

[relax] remove log statement


------------------------------------------
[...truncated 339.78 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ff91ec5af20ed36b5dc057a22e6225ed
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 06, 2021 6:46:45 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 06, 2021 6:46:46 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 06, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 06, 2021 6:46:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:46:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:46:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:46:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:46:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:46:52 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:46:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 06, 2021 6:46:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
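
The plan above shows both push-downs taking effect: only the four used fields are read, and the filter is evaluated inside the BigQuery Storage API instead of in a BeamCalcRel. A hand-written read with roughly the same effect, as a sketch; the table reference is assumed for illustration and does not appear in this log:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    Pipeline p = Pipeline.create();
    p.apply(
        BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // assumed table reference
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Projection push-down: fetch only the used fields.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate push-down: the same filter the planner pushed.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
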
    Nov 06, 2021 6:46:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3805029272753326768.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--Ys0XDcZ1Ygs5E6EejDof1Ai8w_LzZWWU6eJON3RLpk.jar
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 06, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash 6502a9a33647f9284b6516aab98c3c912e3abeb7d797195e76c33ad20244b352> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ZQKpozZH-ShLZRaquYw8kS46vrfXlxledsM60gJEs1I.pb
    Nov 06, 2021 6:46:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 06, 2021 6:46:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 06, 2021 6:46:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 06, 2021 6:46:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 06, 2021 6:47:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-06_11_46_59-14360946181417219534?project=apache-beam-testing
    Nov 06, 2021 6:47:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-06_11_46_59-14360946181417219534
    Nov 06, 2021 6:47:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-06_11_46_59-14360946181417219534
    Nov 06, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-06T18:47:02.933Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:09.916Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.526Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.564Z: Expanding GroupByKey operations into optimizable parts.
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.583Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.648Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.672Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.695Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:10.995Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:11.059Z: Starting 5 workers in us-central1-c...
    Nov 06, 2021 6:47:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:42.774Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 06, 2021 6:47:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:47:54.807Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 06, 2021 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:48:20.955Z: Workers have started successfully.
    Nov 06, 2021 6:48:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:48:20.980Z: Workers have started successfully.
    Nov 06, 2021 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:48:50.770Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:48:50.893Z: Cleaning up.
    Nov 06, 2021 6:48:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:48:50.948Z: Stopping worker pool...
    Nov 06, 2021 6:51:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:51:18.420Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 06, 2021 6:51:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T18:51:18.469Z: Worker pool stopped.
    Nov 06, 2021 6:51:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-06_11_46_59-14360946181417219534 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7e2b905c-e801-416c-a606-4899c22b91f6 and timestamp: 2021-11-06T18:51:24.857000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.658

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 6:51:24 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
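
This warning means the run's metrics were measured but not exported to InfluxDB, because the measurement/database settings were absent. Assuming Beam's usual test options (the option names and values below are an assumption, not taken from this log), the invocation would add flags along these lines to the -DbeamTestPipelineOptions array shown earlier:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"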

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 43.762 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 6s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/3vvgceiu7kygm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2636

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2636/display/redirect>

Changes:


------------------------------------------
[...truncated 339.43 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 06, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 06, 2021 12:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 06, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 06, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 06, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 06, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 06, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 06, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5067046859581493352.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NAS_ADyc_2TG76d8-kvMQU1B_79XZYtFPY5wHcELsJk.jar
    Nov 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash f26a0b8c5bf239b87b4384bfd836eb17780262935f00e0095dbef3c9c12a8650> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8moLjFvyObh7Q4S_2DbrF3gCYpNfAOAJXb7zycEqhlA.pb
    Nov 06, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 06, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 06, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 06, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 06, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-06_05_45_05-16131663518692772936?project=apache-beam-testing
    Nov 06, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-06_05_45_05-16131663518692772936
    Nov 06, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-06_05_45_05-16131663518692772936
    Nov 06, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-06T12:45:08.787Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 06, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:18.626Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.172Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.213Z: Expanding GroupByKey operations into optimizable parts.
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.242Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.323Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.359Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.403Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.750Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:19.817Z: Starting 5 workers in us-central1-a...
    Nov 06, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:45:41.404Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 06, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:03.776Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Nov 06, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:03.812Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Nov 06, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:24.377Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 06, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:29.863Z: Workers have started successfully.
    Nov 06, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:29.882Z: Workers have started successfully.
    Nov 06, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:57.478Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:57.630Z: Cleaning up.
    Nov 06, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:46:57.713Z: Stopping worker pool...
    Nov 06, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:49:32.556Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 06, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T12:49:32.600Z: Worker pool stopped.
    Nov 06, 2021 12:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-06_05_45_05-16131663518692772936 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c45a2ac7-84da-416f-b184-36621621b32a and timestamp: 2021-11-06T12:49:38.890000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.116

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 12:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 50.703 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qfp6xcs3wupjw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2635

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2635/display/redirect?page=changes>

Changes:

[joseinigo] [BEAM-13080] Fix number of default keys

[joseinigo] [BEAM-13080] Fix number of default keys


------------------------------------------
[...truncated 339.52 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 06, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 06, 2021 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 06, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 06, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 06, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 06, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 06, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 06, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 06, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4584843402582179087.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DXrEdkFeYO7eLEqd4ML1yOsEOYagsIxZf-F_Zjey1Mw.jar
    Nov 06, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 1 seconds
    Nov 06, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 06, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105623 bytes, hash 1ac6feceb75e351dc7d668f47b40c1db881980dc58077d189b735e095ddf1f85> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Gsb-zrdeNR3H1mj0e0DB24gZgNxYB30Ym3NeCV3fH4U.pb
    Nov 06, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 06, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 06, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 06, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 06, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-05_23_45_18-10012129750071015529?project=apache-beam-testing
    Nov 06, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-05_23_45_18-10012129750071015529
    Nov 06, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-05_23_45_18-10012129750071015529
    Nov 06, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-06T06:45:21.987Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:28.959Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.535Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.571Z: Expanding GroupByKey operations into optimizable parts.
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.599Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.682Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.699Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:29.720Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:30.008Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:30.060Z: Starting 5 workers in us-central1-c...
    Nov 06, 2021 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:45:47.067Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 06, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:46:21.327Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 06, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:46:46.614Z: Workers have started successfully.
    Nov 06, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:46:46.640Z: Workers have started successfully.
    Nov 06, 2021 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:47:17.263Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:47:17.404Z: Cleaning up.
    Nov 06, 2021 6:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:47:17.471Z: Stopping worker pool...
    Nov 06, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:49:42.729Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 06, 2021 6:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T06:49:42.759Z: Worker pool stopped.
    Nov 06, 2021 6:49:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-05_23_45_18-10012129750071015529 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a76cabe0-e4d9-463d-b739-4989ca0ea5f8 and timestamp: 2021-11-06T06:49:48.731000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.343

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 6:49:48 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 51.408 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/w4ikrjqjlpdi4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2634

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2634/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11217] Metrics Query filtering for DoFn metrics. (#15887)

[noreply] [Go SDK] Go SDK Exits Experimental (#15894)


------------------------------------------
[...truncated 339.33 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 06, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
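
The warning above is harmless for this job, since the option is passed empty on the command line, but the rename it asks for is a one-line change. A minimal sketch, assuming the option is settable exactly as the warning names it; the image name below is purely illustrative:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class ContainerImageOptionSketch {
      public static void main(String[] args) {
        // Same wiring as the test command line, but with the non-deprecated
        // option name; "gcr.io/example/worker:latest" is a placeholder.
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--sdkContainerImage=gcr.io/example/worker:latest")
                .as(DataflowPipelineOptions.class);
        System.out.println(options.getSdkContainerImage());
      }
    }
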
    Nov 06, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 06, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 06, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
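
This IllegalStateException is the actual failure (readUsingDefaultMethod below fails the same way): the PCollection of Beam Rows coming out of BeamSqlRelUtils.toPCollection has no schema attached, so no coder can be inferred for it. The error text itself names the fix. A minimal sketch of that fix, assuming a PCollection<Row> whose schema matches the four projected columns; the class name, field types, and nullability below are illustrative, not taken from the test code:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Illustrative schema for the query's projected columns.
      static final Schema OUTPUT_SCHEMA =
          Schema.builder()
              .addNullableField("author", FieldType.STRING)
              .addNullableField("type", FieldType.STRING)
              .addNullableField("title", FieldType.STRING)
              .addNullableField("score", FieldType.INT64)
              .build();

      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        // setRowSchema registers the schema so a RowCoder can be inferred;
        // without it, finishSpecifying() throws the exception seen above.
        return rows.setRowSchema(OUTPUT_SCHEMA);
      }
    }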

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1266073646]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 06, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 06, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 06, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
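
Note the contrast with the two failing tests above: here the planner collapsed the Calc and IO nodes into a single BeamPushDownIOSourceRel, so only the four used fields are read and the filter is evaluated by the BigQuery Storage API instead of in a ParDo. A rough hand-written equivalent of that pushed-down read at the BigQueryIO level is sketched below; the public Hacker News table reference is illustrative (the test reads its own copy of the dataset), and this is an approximation, not the code Beam SQL generates:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            "Read BQ rows with push-down",
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ) // Storage API read
                // Projection push-down: fetch only the fields the query uses.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter runs server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
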
    Nov 06, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 06, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 06, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 06, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4906749320263215557.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9Cnzqnpiz3fyN-GN-EhnccRPQ1uvvkSdUJeBw-6q4So.jar
    Nov 06, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 06, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 06, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105623 bytes, hash f5a3cfac3b8ef634dc909bd8848ab9ff5d81a9b6ac0d006c43399011b0624f37> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9aPPrDuO9jTckJvYhIq5_12BqbasDQBsQzmQEbBiTzc.pb
    Nov 06, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-05_17_45_09-16047079231059800538?project=apache-beam-testing
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-05_17_45_09-16047079231059800538
    Nov 06, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-05_17_45_09-16047079231059800538
    Nov 06, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-06T00:45:12.530Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 06, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:18.557Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.164Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.219Z: Expanding GroupByKey operations into optimizable parts.
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.255Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.334Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.375Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.393Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.736Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:19.812Z: Starting 5 workers in us-central1-a...
    Nov 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:45:32.058Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 06, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:05.514Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 06, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:29.953Z: Workers have started successfully.
    Nov 06, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:29.986Z: Workers have started successfully.
    Nov 06, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:58.895Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 06, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:59.040Z: Cleaning up.
    Nov 06, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:46:59.118Z: Stopping worker pool...
    Nov 06, 2021 12:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:49:19.648Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 06, 2021 12:49:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-06T00:49:19.702Z: Worker pool stopped.
    Nov 06, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-05_17_45_09-16047079231059800538 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c50cc425-93a6-4533-88c0-1cf8b4d2f365 and timestamp: 2021-11-06T00:49:26.850000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.786

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 06, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 35.312 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/thaznqa6plolc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2633

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2633/display/redirect?page=changes>

Changes:

[noreply] [adhoc] Speedup slow tests for AWS IO modules (#15899)

[noreply] [BEAM-12566] Implement set_axis for DataFrame and Series (#15773)

[noreply] [BEAM-13192] Fix buggy retry tests for AWS SnsIO (#15910)


------------------------------------------
[...truncated 338.67 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 05, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 05, 2021 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 05, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 05, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 05, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 05, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 05, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 05, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4430344775253848586.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jzmnFm7LIVzrG5wY58-AC93KqH7aEm_95qVVWmRdP4Q.jar
    Nov 05, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 files newly uploaded in 0 seconds
    Nov 05, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 05, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105623 bytes, hash 0dbf21d0ace1a05062fed37364e926c0642be74847df92c4aec587ecedd96b0c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Db8h0KzhoFBi_tNzZOkmwGQr50hH35LErsWH7O3Zaww.pb
    Nov 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-05_11_45_04-17499416600177864913?project=apache-beam-testing
    Nov 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-05_11_45_04-17499416600177864913
    Nov 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-05_11_45_04-17499416600177864913
    Nov 05, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-05T18:45:10.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:17.569Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.034Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.063Z: Expanding GroupByKey operations into optimizable parts.
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.090Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.180Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.215Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 05, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.240Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 05, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.523Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:18.593Z: Starting 5 workers in us-central1-c...
    Nov 05, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:45:43.261Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 05, 2021 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:03.055Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 05, 2021 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:27.549Z: Workers have started successfully.
    Nov 05, 2021 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:27.583Z: Workers have started successfully.
    Nov 05, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:56.093Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:56.292Z: Cleaning up.
    Nov 05, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:46:56.375Z: Stopping worker pool...
    Nov 05, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:49:22.970Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 05, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T18:49:23.111Z: Worker pool stopped.
    Nov 05, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-05_11_45_04-17499416600177864913 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6522a502-db16-4089-beb3-5f67fea7087e and timestamp: 2021-11-05T18:49:30.220000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.794

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 6:49:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 41.8 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3io6rzg4qqtdu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2632

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2632/display/redirect>

Changes:


------------------------------------------
[...truncated 337.07 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 05, 2021 12:44:37 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 05, 2021 12:44:38 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 05, 2021 12:44:39 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 05, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
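
The remedy the exception text itself names is to attach a schema to the Row PCollection so a coder can be inferred. Whether the regression here lies in the test or in the SQL-to-PCollection translation cannot be determined from this log alone, but the pattern the message asks for looks roughly like the following sketch (class and helper names are illustrative, not the test's actual code; the field list mirrors the projected columns in the query above):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaSketch {
      // Attach a schema matching the projected columns; without it, coder
      // inference for a PCollection<Row> fails exactly as in the stack trace.
      static PCollection<Row> withSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return rows.setRowSchema(schema);
      }
    }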

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 05, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
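
Unlike the two failing tests above, whose plans keep a BeamCalcRel on top of the IO source, here the projection and filter are pushed into the source itself: only the four used fields are read, and the predicate is evaluated by the BigQuery Storage API. At the BigQueryIO level the pushed-down read corresponds roughly to this sketch (the table reference is a placeholder; the integration test resolves its HACKER_NEWS table through its own table provider):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownSketch {
      static TypedRead<TableRow> read() {
        return BigQueryIO.readTableRows()
            // Placeholder table reference, not the test's actual table.
            .from("some-project:some_dataset.hacker_news")
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Field projection from usedFields=[[by, type, title, score]].
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Row restriction matching the pushed-down filter logged above.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }
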
    Nov 05, 2021 12:44:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6768594134699539699.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w78YeX2bNWRZ0STP1nHnFH--1i3nXmqVbrT-qobXJd8.jar
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 05, 2021 12:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash e931494744edb7ab424d33993651daeaa0a13dbddc6c8479164f30b436d5b977> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6TFJR0Ttt6tCTTOZNlHa6qChPb3cbIR5Fk8wtDbVuXc.pb
    Nov 05, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 05, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 05, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 05, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 05, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-05_05_44_52-6313112018093178519?project=apache-beam-testing
    Nov 05, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-05_05_44_52-6313112018093178519
    Nov 05, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-05_05_44_52-6313112018093178519
    Nov 05, 2021 12:44:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-05T12:44:55.809Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:00.730Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.442Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.481Z: Expanding GroupByKey operations into optimizable parts.
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.512Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.576Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.614Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 05, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.644Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 05, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:01.953Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:02.021Z: Starting 5 workers in us-central1-a...
    Nov 05, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:20.100Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 05, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:45:56.044Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 05, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:46:22.418Z: Workers have started successfully.
    Nov 05, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:46:22.447Z: Workers have started successfully.
    Nov 05, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:46:47.967Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:46:48.140Z: Cleaning up.
    Nov 05, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:46:48.230Z: Stopping worker pool...
    Nov 05, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:49:08.962Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 05, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T12:49:08.988Z: Worker pool stopped.
    Nov 05, 2021 12:49:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-05_05_44_52-6313112018093178519 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 708bc0a3-116a-44be-b339-b52bfc02aab4 and timestamp: 2021-11-05T12:49:16.462000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.946

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 12:49:16 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
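
This warning only means the optional InfluxDB sink was left unconfigured; the BigQuery metrics export configured in the pipeline options is unaffected. If InfluxDB publishing were wanted, the publisher needs at least a database and a measurement. A minimal sketch using the test-utils settings builder, assuming the builder methods of the SDK version in use (host and database are placeholders; the measurement name mirrors the metricsBigQueryTable option above):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxSettingsSketch {
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")       // placeholder host
            .withDatabase("beam_test_metrics")       // placeholder database
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }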

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.007 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 42.506 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/26ui2r6zab35q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2631

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2631/display/redirect>

Changes:


------------------------------------------
[...truncated 340.22 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 05, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 05, 2021 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 05, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 05, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1312856806]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 05, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 05, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 05, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 05, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 05, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7296997174002317624.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Jmd4DeR2C3OdcHI_SasRDBif_FmO9jOfAiH-T5p9xXA.jar
    Nov 05, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 251 files cached, 1 file newly uploaded in 0 seconds
    Nov 05, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 05, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105625 bytes, hash 71bdef2a01c407783ab806b648d5e8f589d4854850cc9ca88b5a941294b8de2f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cb3vKgHEB3g6uAa2SNXo9YnUhUhQzJyoi1qUEpS43i8.pb
    Nov 05, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 05, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 05, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 05, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 05, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-04_23_45_10-10896633104599913044?project=apache-beam-testing
    Nov 05, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-04_23_45_10-10896633104599913044
    Nov 05, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-04_23_45_10-10896633104599913044
    Nov 05, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-05T06:45:14.844Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 05, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.092Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.763Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.800Z: Expanding GroupByKey operations into optimizable parts.
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.828Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.898Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.918Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:22.949Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:23.240Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:23.312Z: Starting 5 workers in us-central1-c...
    Nov 05, 2021 6:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:45:55.734Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 05, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:46:10.627Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 05, 2021 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:46:36.355Z: Workers have started successfully.
    Nov 05, 2021 6:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:46:36.450Z: Workers have started successfully.
    Nov 05, 2021 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:47:07.787Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:47:08.130Z: Cleaning up.
    Nov 05, 2021 6:47:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:47:08.212Z: Stopping worker pool...
    Nov 05, 2021 6:49:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:49:33.790Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 05, 2021 6:49:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T06:49:33.856Z: Worker pool stopped.
    Nov 05, 2021 6:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-04_23_45_10-10896633104599913044 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 80635b4d-160c-4735-8218-6224e09255cf and timestamp: 2021-11-05T06:49:39.581000000Z:
                     Metric:                    Value:
                   read_time                     7.321
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 6:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.043 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 46.217 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/bvncbf57wfc6k

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2630

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2630/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13081] Fixes a compatible issue of decoding null-value bitmap


------------------------------------------
[...truncated 343.17 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 20c75592c3645c6c1287fc9639d3b3c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 05, 2021 12:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 05, 2021 12:45:36 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 05, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 05, 2021 12:45:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:45:39 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:45:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:45:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:40 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@731122468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:45:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@16967185]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 05, 2021 12:45:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 05, 2021 12:45:42 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 05, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 05, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 05, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1337804684323672062.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-AAx2IPWbm2T1FBUxMwGNuN2NT9wsccf3GMhVyaIKZGY.jar
    Nov 05, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Nov 05, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 250 files cached, 2 files newly uploaded in 0 seconds
    Nov 05, 2021 12:45:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 05, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105623 bytes, hash 3082eff3ffd12fd21808029fd67302115297ca878ea78d8e7495204741dbc9d1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MILv8__RL9IYCAKf1nMCEVKXyoeOp42OdJUgR0HbydE.pb
    Nov 05, 2021 12:45:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-04_17_45_51-12504504561100360242?project=apache-beam-testing
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-04_17_45_51-12504504561100360242
    Nov 05, 2021 12:45:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-04_17_45_51-12504504561100360242
    Nov 05, 2021 12:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-05T00:45:54.816Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 05, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:00.774Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.458Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.489Z: Expanding GroupByKey operations into optimizable parts.
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.517Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.587Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.605Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.640Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:01.966Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:02.033Z: Starting 5 workers in us-central1-a...
    Nov 05, 2021 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:17.639Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 05, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:46:54.781Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 05, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:47:23.801Z: Workers have started successfully.
    Nov 05, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:47:23.832Z: Workers have started successfully.
    Nov 05, 2021 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:47:52.535Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 05, 2021 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:47:52.663Z: Cleaning up.
    Nov 05, 2021 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:47:52.735Z: Stopping worker pool...
    Nov 05, 2021 12:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:50:16.613Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 05, 2021 12:50:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-05T00:50:16.660Z: Worker pool stopped.
    Nov 05, 2021 12:50:22 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-04_17_45_51-12504504561100360242 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 55b87c17-43b9-400b-a48c-b0a9a0724bb1 and timestamp: 2021-11-05T00:50:22.428000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.792

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 05, 2021 12:50:22 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
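
The warning above means the run finished but its metrics were dropped rather than written to InfluxDB. Assuming the option names used elsewhere in Beam's test infrastructure, the publisher is configured through extra pipeline options along these lines (database and host values hypothetical):

    --influxMeasurement=sql_bqio_read_java_batch
    --influxDatabase=beam_test_metrics
    --influxHost=http://localhost:8086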

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 50.045 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 4s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/kkt6z3zu7vkj4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2629

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2629/display/redirect?page=changes>

Changes:

[heejong] [BEAM-13021] Deduplicate Python artifact not only by hash but also by

[heejong] add test

[heejong] rename variables

[Luke Cwik] [BEAM-3811] Code clean-up of the CancelleableQueue to not throw

[heejong] rearrange comparison order

[Luke Cwik] [BEAM-13164] Address most of a race condition between instantiation and

[Robert Bradshaw] Defer filesToStage construction until after full jar resolution.

[heejong] s/pathes/paths/

[noreply] Merge pull request #15835: [BEAM-11205] Google Libraries BOM 24.0.0 and

[noreply] Merge pull request #15810: [BEAM-2791] Support low-latency StorageApi


------------------------------------------
[...truncated 364.95 KB...]
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 7:02:38 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 04, 2021 7:02:39 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 7:02:40 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 251 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 7:02:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 7:02:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 7:02:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 7:02:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 7:02:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
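
For context, the same query shape can be exercised without BigQuery at all: a minimal SqlTransform sketch over an in-memory PCollection (the schema and rows here are hypothetical stand-ins for the HACKER_NEWS table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlFilterSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();
        PCollection<Row> news = p.apply(Create.of(
                Row.withSchema(schema).addValues("alice", "story", "Hello", 5).build(),
                Row.withSchema(schema).addValues("bob", "comment", "Re: Hello", 1).build())
            .withRowSchema(schema));
        // A single input PCollection is visible to the query as PCOLLECTION.
        PCollection<Row> result = news.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }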


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1498106782]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@177135030]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 7:02:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 7:02:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 7:02:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 04, 2021 7:02:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 7:02:50 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 252 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 7:02:50 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1899797537501590485.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LJ-awOhJkzjtCzw4UWvyoIMJAvc6BgLNZq68mAh2154.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-nJooQBOmf91zLQQ9Mr-Fe3ZJqjZmeXLkGPCYKyE5Yao.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-v2JS2V2dkj3qXLhVK9Ul4OJ6GeVQ5jMoiZR6oWBlpdY.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-CXmhBqVNRgN9JeVGND8HiM9U1nK4J3vmCy9aZEhaMO8.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.35.0-SNAPSHOT-AkxsV76KCMJ8p8C1LGEqi6WpIkQaUBjEgsky3XdG3fE.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/fn-execution/build/libs/beam-sdks-java-fn-execution-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-fn-execution-2.35.0-SNAPSHOT-HYzqwQ1YSAkUl93DJuIXInSZPmueTA0DKYvs5us5hAs.jar
    Nov 04, 2021 7:02:51 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Nov 04, 2021 7:02:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 7 files newly uploaded in 2 seconds
    Nov 04, 2021 7:02:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 7:02:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <105620 bytes, hash 7b297d664406f46926f0cab963be77fa45c2cb205542ee4c9a8b38a6e5e9c871> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eyl9ZkQG9Gkm8Mq5Y753-kXCyyBVQu5Mmos4puXpyHE.pb
    Nov 04, 2021 7:02:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-04_12_02_56-14662661296618491196?project=apache-beam-testing
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-04_12_02_56-14662661296618491196
    Nov 04, 2021 7:02:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-04_12_02_56-14662661296618491196
    Nov 04, 2021 7:03:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T19:03:01.678Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 7:03:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:09.206Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:09.959Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:09.993Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.019Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.106Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.140Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.175Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.515Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 7:03:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:10.587Z: Starting 5 workers in us-central1-c...
    Nov 04, 2021 7:03:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:23.599Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 7:03:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:03:55.208Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 7:04:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:04:20.033Z: Workers have started successfully.
    Nov 04, 2021 7:04:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:04:20.073Z: Workers have started successfully.
    Nov 04, 2021 7:04:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:04:51.241Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 7:04:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:04:51.429Z: Cleaning up.
    Nov 04, 2021 7:04:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:04:51.518Z: Stopping worker pool...
    Nov 04, 2021 7:07:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:07:14.329Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 7:07:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T19:07:14.370Z: Worker pool stopped.
    Nov 04, 2021 7:07:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-04_12_02_56-14662661296618491196 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 93a54046-1295-4a6d-a257-8ae0c726701b and timestamp: 2021-11-04T19:07:21.624000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.858

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 7:07:21 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 46.807 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 14m 4s
152 actionable tasks: 122 executed, 30 from cache

Publishing build scan...
https://gradle.com/s/df3l5rwzhpgzg

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2628

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2628/display/redirect?page=changes>

Changes:

[mmack] [BEAM-11440] Add integration test for KinesisIO using Localstack on

[mmack] Bump localstack container version for more robust version with respect

[mmack] Bump localstack container version for DynamoDBIOTest to version used in

[mmack] Apply format

[mmack] Apply format

[mmack] Use vendored guava deps


------------------------------------------
[...truncated 338.69 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
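
The -DbeamTestPipelineOptions JSON array on the command line above is how the harness passes options into the test JVM; inside a test they are typically recovered through the testing utilities, along these lines (a sketch assuming the standard TestPipeline helper):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // Materializes the options serialized into -DbeamTestPipelineOptions.
    PipelineOptions options = TestPipeline.testingPipelineOptions();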

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 04, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@825221501]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@116095623]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 04, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6308844425071339762.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-a707v9Ri6qqhdNj0OkxUY-bDg8yAaNcYsZBXbWbmubU.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 8d1a5b183dddffda91dff0a0b309cb1babc2775b56f225a2f1c69b38b5555745> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jRpbGD3d_9qR3_CgswnLG6vCd1tW8iWi8cabOLVVV0U.pb
    Nov 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-04_05_45_04-14129161882371676985?project=apache-beam-testing
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-04_05_45_04-14129161882371676985
    Nov 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-04_05_45_04-14129161882371676985
    Nov 04, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T12:45:07.450Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:12.891Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.525Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.550Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.572Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.632Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.676Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:13.701Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:14.026Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:14.104Z: Starting 5 workers in us-central1-a...
    Nov 04, 2021 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:39.932Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:57.864Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:45:57.899Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Nov 04, 2021 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:08.145Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:31.248Z: Workers have started successfully.
    Nov 04, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:31.282Z: Workers have started successfully.
    Nov 04, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:59.221Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:59.350Z: Cleaning up.
    Nov 04, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:46:59.424Z: Stopping worker pool...
    Nov 04, 2021 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:49:32.552Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 12:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T12:49:32.634Z: Worker pool stopped.
    Nov 04, 2021 12:49:39 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-04_05_45_04-14129161882371676985 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cb582474-17a9-4a9e-9e13-e0806cf89770 and timestamp: 2021-11-04T12:49:39.815000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.731
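
(For context: fields_read and read_time come from the ParDo(RowMonitor) and ParDo(TimeMonitor) steps fused into the read stage above; they appear to record, respectively, the number of row fields returned from BigQuery and the wall-clock seconds spent reading, which is what this perf test tracks from build to build.)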

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
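
(This warning affects only metrics export, not the test outcome: the values above were computed, but the InfluxDB publisher refuses to publish without a measurement and database configured. In Beam's test utilities these arrive as pipeline options, something like --influxDatabase=<db> --influxMeasurement=<name> --influxHost=<url>; treat those exact flag names as an assumption. The Jenkins job here configures only the BigQuery metrics sink via --metricsBigQueryDataset/--metricsBigQueryTable, so the warning is expected.)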

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 51.561 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dojtsbwlnzdlk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2627

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2627/display/redirect>

Changes:


------------------------------------------
[...truncated 341.79 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7918487ff3e2167c7fd65eb5bc00d48d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
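
(The warning is self-inflicted: the Gradle command line above still passes the legacy --workerHarnessContainerImage= flag, so every run of this job will log it until the job definition migrates to --sdkContainerImage.)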
    Nov 04, 2021 6:45:07 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
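
Note the shape of this plan: without push-down, the projection and the WHERE clause live in a BeamCalcRel stacked on a plain BeamIOSourceRel, meaning the filter would run inside the pipeline after a full-width read of the table. The failure below happens before any of that executes; converting the plan to a PCollection already trips over a missing Row coder at pipeline-construction time.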


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1334773707]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
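
This test and readUsingDefaultMethod just below fail at the same point, and the exception lists the remedies itself. The third one, PCollection.setRowSchema, is the usual fix when a transform emits Beam Rows: a coder for Row cannot be inferred, so a Row-typed PCollection must carry its Schema explicitly. What follows is a minimal, self-contained sketch of that remedy; the class name, schema, and values are hypothetical illustrations mirroring the query's projected columns, not the actual code at BigQueryIOPushDownIT.java:162.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TypeDescriptor;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema mirroring the four projected columns of the query.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(Create.of(3L, 5L, 8L))
                .apply(
                    MapElements.into(TypeDescriptor.of(Row.class))
                        .via(
                            (Long score) ->
                                Row.withSchema(schema)
                                    .addValues("someone", "story", "title-" + score, score)
                                    .build()))
                // Without this call, coder inference fails exactly as in the log:
                // "Cannot provide a coder for a Beam Row."
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }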

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1596936525]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
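
Worth contrasting with the two failures above: for this test the planner collapsed the BeamCalcRel-over-BeamIOSourceRel pair into a single BeamPushDownIOSourceRel, so the projection (usedFields) and the predicate are handed to the BigQuery Storage Read API, and only four columns of pre-filtered rows ever enter the pipeline. Outside Beam SQL the same effect is available from BigQueryIO directly; here is a sketch under the assumption of a placeholder table reference (the project and dataset below are illustrative, not the IT's actual table):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("my-project:my_dataset.HACKER_NEWS") // placeholder table reference
                    // The Storage Read API, i.e. the DIRECT_READ method from the log.
                    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                    // Column projection: the "usedFields" of the push-down plan.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Predicate evaluated inside BigQuery: the pushed-down filter.
                    .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
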
    Nov 04, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4455652386111425539.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5FRRxV0QA_YYLcSBPRnkSGheyJBcTfb-kQr-7EejQAk.jar
    Nov 04, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 04, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 74f9bc57c48b1a71db9a27c78fb0eef2fc93a893f8b4f3f7659257f7a6a50f69> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dPm8V8SLGnHbmifHj7Du8vyTqJP4tPP3ZZJX96alD2k.pb
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_23_45_21-7813787463402869550?project=apache-beam-testing
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_23_45_21-7813787463402869550
    Nov 04, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_23_45_21-7813787463402869550
    Nov 04, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T06:45:28.416Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:37.405Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.009Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.054Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.086Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.148Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.179Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.212Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.544Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:45:38.645Z: Starting 5 workers in us-central1-a...
    Nov 04, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:06.017Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:28.366Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:54.525Z: Workers have started successfully.
    Nov 04, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:46:54.550Z: Workers have started successfully.
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:23.836Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:23.971Z: Cleaning up.
    Nov 04, 2021 6:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:47:24.048Z: Stopping worker pool...
    Nov 04, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:49:46.638Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T06:49:46.683Z: Worker pool stopped.
    Nov 04, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_23_45_21-7813787463402869550 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea3bdc43-72de-42cd-affc-7a235d177c5d and timestamp: 2021-11-04T06:49:53.546000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.355

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 50.528 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/hzsd35ktbby34

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2626

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2626/display/redirect?page=changes>

Changes:

[noreply] Add missing parentheses for Python test example

[noreply] Update test input and imports

[25622840+adude3141] [BEAM-13157] support hadoop configuration on ParquetIO.Parse

[Pablo Estrada] JdbcIO has a single WriteFn underlying all implementations

[Luke Cwik] [BEAM-13015] Use a network based channel instead of an inmemory one

[Pablo Estrada] Addressing comments

[noreply] Merge pull request #15839 from [BEAM-13041][Playground] Prepare files

[noreply] [BEAM-13001] updated CHANGES.md to include msec counter for Go (#15872)

[noreply] Add window mapping to CHANGES.md (#15871)

[noreply] Merge pull request #15784 from [BEAM-8135]  - Removing

[noreply] [BEAM-13119] Subdirectory prefix tag for Go SDK (#15881)

[noreply] Merge pull request #15852 from [BEAM-13102] [Playground] update

[Kyle Weaver] Fix typo: s/spark/twister2


------------------------------------------
[...truncated 341.49 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7918487ff3e2167c7fd65eb5bc00d48d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 04, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 04, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 04, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 04, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1958174945]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@307649172]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 04, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 04, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-wq460SWma404IxeH4GpbhmTrsPsdyGkKfzx-yBtHWMs.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6646685300132765611.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gm7il0NID2ImTTV4QaFV_gtdJa36-04og-BPG3yu7U4.jar
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Nov 04, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 04, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 2496d79ee130178d719ba56cce06ea66df7e7870ebc93f3dd59fe567819d7d05> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JJbXnuEwF41xm6VszgbqZt9-eHDryT891Z_lZ4GdfQU.pb
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 04, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_17_45_32-3784561410186712558?project=apache-beam-testing
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_17_45_32-3784561410186712558
    Nov 04, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_17_45_32-3784561410186712558
    Nov 04, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-04T00:45:36.229Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:43.988Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.803Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.833Z: Expanding GroupByKey operations into optimizable parts.
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.860Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.938Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:44.964Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.011Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.416Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:45:45.495Z: Starting 5 workers in us-central1-a...
    Nov 04, 2021 12:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:17.321Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 04, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:28.904Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:28.967Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Nov 04, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:46:39.446Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 04, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:02.259Z: Workers have started successfully.
    Nov 04, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:02.284Z: Workers have started successfully.
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.087Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.228Z: Cleaning up.
    Nov 04, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:47:32.306Z: Stopping worker pool...
    Nov 04, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:49:52.832Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 04, 2021 12:49:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-04T00:49:52.881Z: Worker pool stopped.
    Nov 04, 2021 12:49:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_17_45_32-3784561410186712558 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 07c8576d-d8de-4c52-978d-96b19d320a6b and timestamp: 2021-11-04T00:49:58.585000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.819

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 04, 2021 12:49:58 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 44.362 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/cwrhxnbd4reym

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2625

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2625/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-13105] playground - add shortcut hint tooltip

[dpcollins] Performance improvement to PubSubLiteIO to not use a streaming committer

[noreply] Merge pull request #15854 from [BEAM-13046][Playground] protobuf

[noreply] [BEAM-12550] Parallelizable skew Implementation  (#15809)

[noreply] Merge pull request #15782 from [BEAM-13034] [Playground] add semantics

[hlagosperez] Use BigQueryIO.loadProjectId in WriteRename class to create

[noreply] [BEAM-13099] Use vendored Calcite 1.28.0 in SQL extensions (#15836)


------------------------------------------
[...truncated 351.67 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 232'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 232'
Successfully started process 'Gradle Test Executor 232'

Gradle Test Executor 232 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 7:09:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 7:09:21 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 7:09:22 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 7:09:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1958174945]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
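
    [Note: the failure above is the missing-schema case the error text itself points at: the
    RowMonitor output is a PCollection<Row> with no schema attached, so no RowCoder can be
    inferred. A minimal sketch of the fix the message suggests, assuming a PCollection<Row> in
    hand; the field names and types below mirror the projected columns and are illustrative,
    not the IT's actual schema:]

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class RowSchemaFix {
          // Attach a Schema so the PCollection carries a RowCoder and the
          // "Unable to return a default Coder" check above passes.
          static PCollection<Row> withSchema(PCollection<Row> rows) {
            Schema schema =
                Schema.builder()
                    .addStringField("author") // illustrative fields, mirroring
                    .addStringField("type")   // the projected columns; not the
                    .addStringField("title")  // IT's actual schema
                    .addInt32Field("score")
                    .build();
            return rows.setRowSchema(schema);
          }
        }

    [With a schema attached, coder inference succeeds because a schema-aware PCollection
    gets RowCoder.of(schema) automatically. The same fix applies to the identical
    readUsingDefaultMethod failure below and to the repeats in the later builds.]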

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@307649172]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:84)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 7:09:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 7:09:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 7:09:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
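
    [Note: the DIRECT_READ/push-down path above is driven by how the table is declared. A
    hedged sketch of declaring and querying such a table with Beam SQL — the DDL shape follows
    the Beam SQL BigQuery extension, but the column list and LOCATION string are illustrative
    placeholders, not this job's actual table definition:]

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        class PushDownSketch {
          static PCollection<Row> read(Pipeline pipeline) {
            return pipeline.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score "
                            + "FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    // "method": "DIRECT_READ" selects the BigQuery Storage API
                    // read path, which is what enables the push-down above.
                    .withDdlString(
                        "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, "
                            + "title VARCHAR, score INTEGER) "
                            + "TYPE bigquery "
                            + "LOCATION 'apache-beam-testing:beam.hacker_news' "
                            + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'"));
          }
        }

    [When the planner can translate both the projection and the filter, it collapses them into
    the single BeamPushDownIOSourceRel shown above instead of a BeamCalcRel over a full-table
    read.]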
    Nov 03, 2021 7:09:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 7:09:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 7:09:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-j-IzoedawBtFeEQjTP_qkinAzKtlGqtdDTI4hpidSs4.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-nCFHg_Ok6oUNqJWQK4aGsEvd0a5KXUR12Et1-E8vKtA.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5536625703926143093.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pXm-nRgXs_lcCEVEJ9vb-adUaCE_Z3vZh5VChJGHlT0.jar
    Nov 03, 2021 7:09:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_28_0/0.1/1c51cee3b49ce084d1e0140b0344ff9151a61c3/beam-vendor-calcite-1_28_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_28_0-0.1-rC8q7_PRvA8jzRG9mkOwFoyPqh5LDK33P7R0J3HxuNY.jar
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 1 seconds
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 7:09:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 1fb3d0d4bbe9abd9019153e842179939b4768783aa4309e6eb8a0b1365d173cf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-H7PQ1Lvpq9kBkVPoQheZObR2h4OqQwnm64oLE2XRc88.pb
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 7:09:39 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_12_09_39-4499515394267361049?project=apache-beam-testing
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_12_09_39-4499515394267361049
    Nov 03, 2021 7:09:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_12_09_39-4499515394267361049
    Nov 03, 2021 7:09:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T19:09:42.559Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:50.175Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.145Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.185Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.219Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.294Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.315Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 7:09:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.367Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 7:09:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.714Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 7:09:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:51.786Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 7:09:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:09:56.839Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
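
    [Note: the 100-descriptor warning above is benign for this job; metrics still surface via
    dataflow.googleapis.com/job/user_counter. For reference, stale custom descriptors can be
    deleted through the Monitoring API the log links to. A hedged sketch using the
    google-cloud-monitoring Java client; the project ID and metric type are placeholders:]

        import com.google.cloud.monitoring.v3.MetricServiceClient;
        import com.google.monitoring.v3.MetricDescriptorName;

        class MetricDescriptorCleanup {
          static void deleteOne() throws Exception {
            // Find stale descriptors first via projects.metricDescriptors.list,
            // then delete them one at a time.
            try (MetricServiceClient client = MetricServiceClient.create()) {
              client.deleteMetricDescriptor(
                  MetricDescriptorName.of(
                      "apache-beam-testing", "custom.googleapis.com/some_old_counter"));
            }
          }
        }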
    Nov 03, 2021 7:10:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:31.894Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 7:11:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:58.272Z: Workers have started successfully.
    Nov 03, 2021 7:11:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:10:58.305Z: Workers have started successfully.
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.353Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.464Z: Cleaning up.
    Nov 03, 2021 7:11:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:11:29.550Z: Stopping worker pool...
    Nov 03, 2021 7:14:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:14:00.886Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 7:14:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T19:14:00.982Z: Worker pool stopped.
    Nov 03, 2021 7:14:07 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_12_09_39-4499515394267361049 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cfc9a44a-962e-4507-816d-c014f3ac2f55 and timestamp: 2021-11-03T19:14:07.073000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      9.18

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 7:14:07 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
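
    [Note: the load-test metrics above are computed but not published because the InfluxDB
    settings are absent from -DbeamTestPipelineOptions. A hedged sketch of the missing options
    — the option names are assumed from Beam's InfluxDBSettings test utility and the values
    are placeholders, so verify against the test utils before relying on them:]

        "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"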

Gradle Test Executor 232 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 52.203 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 11m 18s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/x26oq4t425zya

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2624

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2624/display/redirect>

Changes:


------------------------------------------
[...truncated 338.08 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 12:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 12:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2135699149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7606525479549177092.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6Pf_rRZo9FHC4KgsnWgrOk3n0kTO4e1txJWFCKxbCjM.jar
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 12:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 010a725eb0ad2a1b0c23838c477f418f79093de9b022a685eef3031f25cc74bc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AQpyXrCtKhsMI4OMR39Bj3kJPemwIqaF7vMDHyXMdLw.pb
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-03_05_45_01-9053871630973985449?project=apache-beam-testing
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-03_05_45_01-9053871630973985449
    Nov 03, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-03_05_45_01-9053871630973985449
    Nov 03, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T12:45:05.538Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.285Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.916Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.950Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:10.986Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.056Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.085Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.108Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.423Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:11.492Z: Starting 5 workers in us-central1-a...
    Nov 03, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:45:23.144Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:07.742Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:34.075Z: Workers have started successfully.
    Nov 03, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:46:34.094Z: Workers have started successfully.
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.277Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.402Z: Cleaning up.
    Nov 03, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:47:00.483Z: Stopping worker pool...
    Nov 03, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:49:27.353Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T12:49:27.392Z: Worker pool stopped.
    Nov 03, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-03_05_45_01-9053871630973985449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4a192225-7014-4b47-959a-ca92c671ad69 and timestamp: 2021-11-03T12:49:34.179000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.421

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:49:34 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 48.631 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2btndnzz3rrze

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2623

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2623/display/redirect>

Changes:


------------------------------------------
[...truncated 337.52 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 03, 2021 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@876027468]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1730798756]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5795617607859905281.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8FzY6dr2515Uc6k2q5vgTxguum7HFgvL6MV6d1T43dg.jar
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash fd23cb9ba13ed2dbe2a62eef9d00f3c0d51dd74d209819e091e9f8c6ff93b418> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_SPLm6E-0tvipi7vnQDzwNUd100gmBngken4xv-TtBg.pb
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_23_45_04-16838286692204333905?project=apache-beam-testing
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_23_45_04-16838286692204333905
    Nov 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_23_45_04-16838286692204333905
    Nov 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T06:45:07.854Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:14.350Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.179Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.219Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.256Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.320Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.338Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.359Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.664Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:15.726Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:42.218Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:45:58.404Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:24.052Z: Workers have started successfully.
    Nov 03, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:24.078Z: Workers have started successfully.
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.600Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.721Z: Cleaning up.
    Nov 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:46:53.788Z: Stopping worker pool...
    Nov 03, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:49:20.340Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T06:49:20.382Z: Worker pool stopped.
    Nov 03, 2021 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_23_45_04-16838286692204333905 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f30d6cb1-8c74-4b13-b46b-59f8e38672d6 and timestamp: 2021-11-03T06:49:26.204000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.816

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 6:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
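
On the warning above: InfluxDBPublisher skips publishing unless both a database and a measurement are configured. A hedged sketch of how that is usually wired up via Beam's test utilities (builder method names are assumptions based on org.apache.beam.sdk.testutils.publishing.InfluxDBSettings; host and database values are placeholders):

    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")      // placeholder host
            .withDatabase("beam_test_metrics")      // placeholder database
            .withMeasurement("sql_bqio_read_java_batch")
            .get();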

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 37.881 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kfmfqqp6ee4bi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2622

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2622/display/redirect?page=changes>

Changes:

[Andrew Pilloud] [BEAM-13118] Use branch for nightly "tag", tags are immutable

[dmitrii_kuzin] [BEAM-12730] Python. Custom delimiter add corner case

[noreply] Minor: add resource key to presentation materials link (#15867)


------------------------------------------
[...truncated 337.40 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 03, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
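
In case it helps future runs: per this warning, the empty --workerHarnessContainerImage= entry in the -DbeamTestPipelineOptions list above would simply become --sdkContainerImage= (same value, newer flag name, both taken from the warning itself).
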
    Nov 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 03, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 03, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7070725395123530739.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GY5XTQWkkq5srSiHxHT3ygyhmw90uTm0UAMrc-03H3M.jar
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 669bbc407586561a0da573370b5ecb344d546fb7d9ea60c3057f4dc7326b1cac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Zpu8QHWGVhoNpXM3C17LNE1Ub7fZ6mDDBX9NxzJrHKw.pb
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_17_45_08-3250855308008233230?project=apache-beam-testing
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_17_45_08-3250855308008233230
    Nov 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_17_45_08-3250855308008233230
    Nov 03, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-03T00:45:12.708Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:19.793Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.656Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.692Z: Expanding GroupByKey operations into optimizable parts.
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.728Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.803Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.829Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:20.864Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 03, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:21.193Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:21.279Z: Starting 5 workers in us-central1-c...
    Nov 03, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:45:42.625Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 03, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:07.501Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 03, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:33.979Z: Workers have started successfully.
    Nov 03, 2021 12:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:46:34.014Z: Workers have started successfully.
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.147Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.298Z: Cleaning up.
    Nov 03, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:47:06.382Z: Stopping worker pool...
    Nov 03, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:49:43.173Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 03, 2021 12:49:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-03T00:49:43.255Z: Worker pool stopped.
    Nov 03, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_17_45_08-3250855308008233230 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ea8fcef7-13e7-47e2-843b-217f4aca3c90 and timestamp: 2021-11-03T00:49:52.543000000Z:
                     Metric:                    Value:
                   read_time                     8.287
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 03, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 0.651 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/us5b75cof2lhu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2621

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2621/display/redirect?page=changes>

Changes:

[Alexey Romanenko] [BEAM-12070] ParquetIO: use splittable reading by default

[noreply] [BEAM-13001] fixes nil reference error in Extractor.ExtractFrom (#15865)

[noreply] Merge pull request #15842 from [BEAM-13125][Playground] Update

[noreply] Merge pull request #15838 from [BEAM-13127] [Playground] Implement TCP

[noreply] Merge pull request #15832 from [BEAM-13149] Changing readWithPartitions


------------------------------------------
[...truncated 340.84 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 6:45:43 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 6:45:44 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 6:45:44 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 6:45:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:50 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 6:45:51 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 02, 2021 6:45:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 6:45:55 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7244616855344156846.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QcTO2FE2rdJhsT1DuYDlHyB5u62L6B2JxfVGSxa2d-4.jar
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8b1bc08211da6b968ddc53b1d99ca763ab9a87c01e01a1f9ce345a88dc94e2fe> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ixvAghHaa5aN3FOx2ZynY6uah8AeAaH5zjRaiNyU4v4.pb
    Nov 02, 2021 6:45:58 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 6:45:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_11_45_59-6555322559487291116?project=apache-beam-testing
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_11_45_59-6555322559487291116
    Nov 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_11_45_59-6555322559487291116
    Nov 02, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T18:46:02.598Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:09.826Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.672Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.702Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.727Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.789Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.844Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:10.870Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:11.159Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:11.218Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:42.877Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:46:51.962Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:18.262Z: Workers have started successfully.
    Nov 02, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:18.287Z: Workers have started successfully.
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:48.958Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:49.522Z: Cleaning up.
    Nov 02, 2021 6:47:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:47:49.610Z: Stopping worker pool...
    Nov 02, 2021 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:50:16.630Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 6:50:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T18:50:16.672Z: Worker pool stopped.
    Nov 02, 2021 6:50:23 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_11_45_59-6555322559487291116 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9b0f029b-19bf-446c-8bb5-8c8fef6b2c68 and timestamp: 2021-11-02T18:50:23.504000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.528

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:50:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 45.609 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 55s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/vgie4525rvl62

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2620

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2620/display/redirect>

Changes:


------------------------------------------
[...truncated 337.98 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 12:44:48 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 12:44:49 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:52 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:53 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
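
A plan like the one above is produced when a Beam SQL query runs over a schema-aware input. As a rough sketch of that invocation (the input PCollection<Row> hackerNewsRows and its registration under the HACKER_NEWS name are illustrative assumptions, not the test's actual wiring):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    // hackerNewsRows: a PCollection<Row> carrying the HACKER_NEWS schema (assumed).
    PCollection<Row> result =
        PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNewsRows)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));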


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
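
The exception text carries its own remedy: a PCollection of Beam Rows needs an explicit schema (or coder) before the pipeline is finalized. A minimal sketch of the setRowSchema fix, using a hand-built schema and a Create source as stand-ins (neither is the integration test's actual code; a Pipeline named pipeline is assumed in scope):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema = Schema.builder()
        .addStringField("author")
        .addStringField("type")
        .addStringField("title")
        .addInt32Field("score")
        .build();

    PCollection<Row> rows = pipeline
        .apply(Create.of(
            Row.withSchema(schema).addValues("a", "story", "t", 3).build()))
        // Without this call, coder inference fails exactly as logged above.
        .setRowSchema(schema);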

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
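
Push-down here means the projection (usedFields) and the filter just logged are handed to the BigQuery Storage Read API rather than evaluated inside the pipeline. Hand-written against BigQueryIO, the read corresponds roughly to this sketch (the table name is an illustrative assumption, not the test's table):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // assumed table
            .withMethod(TypedRead.Method.DIRECT_READ)      // Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
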
    Nov 02, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3842048782514570956.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B6RU5pjy3QsQhPg_OyHW7a6dnrefS8u8c1vQ68b2Xus.jar
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 12:44:58 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9f916284b45e87b784dff29ff402647589963c71d357a4a07325d9c9b40ff061> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-n5FihLReh7eE3_Kf9AJkdYmWPHHTV6SgcyXZybQP8GE.pb
    Nov 02, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-02_05_45_01-7694486272515793678?project=apache-beam-testing
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-02_05_45_01-7694486272515793678
    Nov 02, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-02_05_45_01-7694486272515793678
    Nov 02, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T12:45:04.664Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:11.089Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.020Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.068Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.103Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.207Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.248Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.275Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.647Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:12.725Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:33.974Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:45:56.823Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:23.076Z: Workers have started successfully.
    Nov 02, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:23.115Z: Workers have started successfully.
    Nov 02, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.255Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.406Z: Cleaning up.
    Nov 02, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:46:54.493Z: Stopping worker pool...
    Nov 02, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:49:09.904Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T12:49:09.959Z: Worker pool stopped.
    Nov 02, 2021 12:49:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-02_05_45_01-7694486272515793678 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6c50898b-f669-4c5e-811a-f417ee70a90f and timestamp: 2021-11-02T12:49:15.186000000Z:
                     Metric:                    Value:
                   read_time                     9.862
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:49:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 30.156 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b76zit6bf3iji

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2619

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2619/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13145][Playground]


------------------------------------------
[...truncated 340.34 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
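
Same root cause as in the previous build. For completeness, the other remedy the message lists is an explicit coder rather than a schema; as a sketch, with rows and schema assumed in scope as in the earlier example:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent alternative to rows.setRowSchema(schema).
    rows.setCoder(RowCoder.of(schema));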

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2135699149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 02, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3292997521250367930.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vac6rsy2zot7Ye1q6iGWKk9dM9M7hihMRK6U3xtnfmo.jar
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash aa9868d6a69d306538efc2305917dfe40928aa9a0b30a22cd9b65907ae365b40> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qpho1qadMGU478IwWRff5AkoqpoLMKIs2bZZB642W0A.pb
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_23_45_10-6157933829983358251?project=apache-beam-testing
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_23_45_10-6157933829983358251
    Nov 02, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_23_45_10-6157933829983358251
    Nov 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T06:45:13.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:21.648Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.243Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.272Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.293Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.370Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.400Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.429Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.774Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:22.854Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:45:52.273Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:02.349Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 6:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:29.915Z: Workers have started successfully.
    Nov 02, 2021 6:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:29.948Z: Workers have started successfully.
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.219Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.361Z: Cleaning up.
    Nov 02, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:46:59.438Z: Stopping worker pool...
    Nov 02, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:49:21.175Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 6:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T06:49:21.219Z: Worker pool stopped.
    Nov 02, 2021 6:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_23_45_10-6157933829983358251 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4009beec-72e5-4360-9687-9249b74ff4ed and timestamp: 2021-11-02T06:49:26.885000000Z:
                     Metric:                    Value:
                   read_time                     8.163
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 6:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 36.408 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xcnt5kaymwwxc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2618

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2618/display/redirect?page=changes>

Changes:

[avilovpavel6] Added cleanup os env before and after tests

[noreply] [BEAM-13001] fixes Golint issues (#15859)

[noreply] Merge pull request #15840 from [Playground][BEAM-13146][Bugfix]


------------------------------------------
[...truncated 346.24 KB...]
Gradle Test Executor 15 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 15'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 15'
Successfully started process 'Gradle Test Executor 15'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 02, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 02, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1591513427]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 02, 2021 12:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 02, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 02, 2021 12:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
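
Here the planner produced a BeamPushDownIOSourceRel instead of a BeamCalcRel over the source: only the four used fields are requested, and the filter is handed to the BigQuery Storage Read API. Push-down is tied to the DIRECT_READ method logged above; the DEFAULT read path (export-based) keeps the filter inside Beam, as seen in the two failing tests. A hedged sketch of declaring and querying such a table with Beam SQL (table name, column subset, and LOCATION are illustrative assumptions, not this job's actual definition):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // TBLPROPERTIES selects the Storage Read API, which is what allows the
        // planner to push projections and filters into the source.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, "
                + "title VARCHAR, score BIGINT) "
                + "TYPE bigquery "
                + "LOCATION 'my-project:my_dataset.hacker_news' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> result =
            PCollectionTuple.empty(p)
                .apply(
                    SqlTransform.query(
                            "SELECT `by` AS author, type, title, score "
                                + "FROM HACKER_NEWS "
                                + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                        .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }
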
    Nov 02, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-OSJn6rDYScnOqVtjqjWjn4keTL2V-mBSDXbhyjSfST0.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test803957735624882856.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-y2G5I8M9kBqBSuKvgS5sb2AJcWFXLFuhWpFb-qxDtMc.jar
    Nov 02, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 1 second
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 02, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 40e7a61a8b1022ef41ae767277a5d9292013a982afc95351071c750e083fc63e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QOemGosQIu9BrnZyd6XZKSATqYKvyVNRBxx1Dgg_xj4.pb
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 02, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_17_45_16-13039375285617828759?project=apache-beam-testing
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_17_45_16-13039375285617828759
    Nov 02, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_17_45_16-13039375285617828759
    Nov 02, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-02T00:45:19.836Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:26.386Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.091Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.121Z: Expanding GroupByKey operations into optimizable parts.
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.155Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.239Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.269Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.300Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:27.700Z: Starting 5 workers in us-central1-c...
    Nov 02, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:45:36.899Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 02, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:12.033Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 02, 2021 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:37.384Z: Workers have started successfully.
    Nov 02, 2021 12:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:46:37.402Z: Workers have started successfully.
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.153Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.294Z: Cleaning up.
    Nov 02, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:47:05.367Z: Stopping worker pool...
    Nov 02, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:49:26.343Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 02, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-02T00:49:26.379Z: Worker pool stopped.
    Nov 02, 2021 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_17_45_16-13039375285617828759 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8438149d-de28-40aa-b99d-1ec3e4b5cc57 and timestamp: 2021-11-02T00:49:31.834000000Z:
                     Metric:                    Value:
                   read_time                     8.296
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 02, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
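
For these results to reach InfluxDB, the publisher needs both a measurement and a database to be configured; in Beam's perf-test jobs that is normally done through pipeline options (along the lines of --influxMeasurement, --influxDatabase and --influxHost; option names assumed from Beam's test utilities, not shown in this job's invocation). Without them the metrics are still printed above and exported via the --metricsBigQuery* options; only the InfluxDB publication is skipped.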

Gradle Test Executor 15 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 33.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/volsy5iuok6fq

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2617

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2617/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11936] Fix errorprone warnings (#15821)


------------------------------------------
[...truncated 347.10 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 9be2c8afe89e1c7297cec7ad1265b140
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 6:45:15 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1499886119]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@76893149]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 6:45:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 01, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-2oUL_N4em_khgkLkL9vgMQd6n8CGvDHBrAAkT7iMDe8.jar
    Nov 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3405281722735097461.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qT-Q3fyfdvPuFsH3R1NAkPgT8PJT9fKtrNItXP9V8Ko.jar
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 1 second
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8eda6dd26c3ca6753e6a567dbb4bad3357fab6a2faf9bb99c46fe8fde8832b9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jtpt0mw8pnU-alZ9u0utM1f6tqL6-buZxG_o_eiDK50.pb
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 6:45:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_11_45_28-16407113518321098892?project=apache-beam-testing
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_11_45_28-16407113518321098892
    Nov 01, 2021 6:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_11_45_28-16407113518321098892
    Nov 01, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T18:45:32.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:38.968Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.638Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.695Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.721Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.776Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.804Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:39.857Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:40.169Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:45:40.246Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:04.945Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:24.967Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:51.351Z: Workers have started successfully.
    Nov 01, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:46:51.375Z: Workers have started successfully.
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:22.817Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:23.107Z: Cleaning up.
    Nov 01, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:47:23.190Z: Stopping worker pool...
    Nov 01, 2021 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:49:35.917Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 6:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T18:49:35.964Z: Worker pool stopped.
    Nov 01, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_11_45_28-16407113518321098892 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 75d5baac-d5ef-4a7e-a661-d0319742e5e6 and timestamp: 2021-11-01T18:49:44.186000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.351

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 33.779 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 24s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/bwd2cenuqqalq

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2616

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2616/display/redirect>

Changes:


------------------------------------------
[...truncated 338.24 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 01, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test996361637803596029.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7vBJw3xz9nE-MFn_YCL-kyxiSYBTh7lKy7d33tCiM5Y.jar
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9a518e1a33aead2d7c93fdfcf6dff801298cd300b81a5cae909b83fd0fd95e42> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mlGOGjOurS18k_389t_4ASmM0wC4GlyukJuD_Q_ZXkI.pb
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-11-01_05_45_09-5328820062955668730?project=apache-beam-testing
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-11-01_05_45_09-5328820062955668730
    Nov 01, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-11-01_05_45_09-5328820062955668730
    Nov 01, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T12:45:12.132Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:17.657Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.396Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.443Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.467Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.564Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.593Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.627Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:18.957Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:19.035Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:45:27.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:03.047Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:28.105Z: Workers have started successfully.
    Nov 01, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:28.145Z: Workers have started successfully.
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:55.948Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:56.077Z: Cleaning up.
    Nov 01, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:46:56.156Z: Stopping worker pool...
    Nov 01, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:49:22.246Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T12:49:22.286Z: Worker pool stopped.
    Nov 01, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-11-01_05_45_09-5328820062955668730 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ff6d4c7e-c40b-4b75-acf2-99c1f9027612 and timestamp: 2021-11-01T12:49:28.214000000Z:
                     Metric:                    Value:
                   read_time                     9.086
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 39.12 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/o74ovdvw6xuwm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2615

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2615/display/redirect>

Changes:


------------------------------------------
[...truncated 339.24 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
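
    The warning above names its own replacement flag, so the corresponding
    change to the -DbeamTestPipelineOptions array shown earlier is mechanical
    (assuming the image value stays empty, as in the current invocation):

        before: "--workerHarnessContainerImage="   (deprecated legacy option)
        after:  "--sdkContainerImage="             (replacement named by the warning)
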
    Nov 01, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
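
    As context for the query above, a minimal sketch of running the same kind
    of query through Beam SQL in Java. This is illustrative rather than the
    harness this test uses; it assumes a schema-aware PCollection<Row> named
    "items" containing the referenced fields:

        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // "items" is addressable inside the query as PCOLLECTION.
        PCollection<Row> filtered =
            items.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
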
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
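
    The exception message above points at the remedy itself: a
    PCollection<Row> carries no default coder, so it needs an explicit schema.
    A minimal, hypothetical sketch of that pattern (the pass-through DoFn and
    names here are illustrative stand-ins, not the test's actual RowMonitor):

        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // Pass-through DoFn standing in for the test's RowMonitor.
        class MonitorFn extends DoFn<Row, Row> {
          @ProcessElement
          public void process(@Element Row row, OutputReceiver<Row> out) {
            out.output(row);
          }
        }

        // rows: a schema-aware PCollection<Row> produced upstream.
        PCollection<Row> monitored =
            rows.apply("ParDo(RowMonitor)", ParDo.of(new MonitorFn()))
                // Row has no default coder; attach the schema explicitly so
                // getCoder() resolves to a RowCoder, per the message above.
                .setRowSchema(rows.getSchema());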

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
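
    The BEAMPlan above shows the project and filter folded into the source
    (usedFields plus a BigQueryFilter), and the log line above shows the
    predicate actually handed to BigQuery. As a rough, hedged equivalent in
    plain Java, the push-down maps onto the Storage Read API's field selection
    and row restriction; the table reference below is hypothetical:

        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

        // Illustrative only: read just the used fields and apply the pushed
        // filter server-side via the BigQuery Storage Read API.
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS")  // hypothetical table
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
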
    Nov 01, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1278691954197753490.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8-s4wE0QdAf7vuorunZ2HiYBUrNG7vGkUkzRWhCH0mA.jar
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b9540334cccc8ece51b8d8fa27c9e4c2c3a6111acf82b22d98f57de770fec7d1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uVQDNMzMjs5RuNj6J8nkwsOmERrPgrItmPV953D-x9E.pb
    Nov 01, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_23_45_09-11723723560549017728?project=apache-beam-testing
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_23_45_09-11723723560549017728
    Nov 01, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_23_45_09-11723723560549017728
    Nov 01, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T06:45:14.760Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:21.202Z: Worker configuration: e2-standard-2 in us-central1-c.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.143Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.183Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.210Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.285Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.311Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.344Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.661Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:22.729Z: Starting 5 workers in us-central1-c...
    Nov 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:28.298Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:54.361Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:45:54.387Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Nov 01, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:04.617Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:29.753Z: Workers have started successfully.
    Nov 01, 2021 6:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:46:29.784Z: Workers have started successfully.
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.657Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.785Z: Cleaning up.
    Nov 01, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:47:00.865Z: Stopping worker pool...
    Nov 01, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:49:22.985Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T06:49:23.073Z: Worker pool stopped.
    Nov 01, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_23_45_09-11723723560549017728 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c94dbc5e-23b1-48da-b027-b875998264a4 and timestamp: 2021-11-01T06:49:30.272000000Z:
                     Metric:                    Value:
                   read_time                     10.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
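
    On the warning above: the InfluxDB publisher refuses to publish without a
    database and a measurement. Assuming the option names used elsewhere in
    Beam's performance-test jobs (an assumption -- these flags do not appear in
    this log), the -DbeamTestPipelineOptions array would gain entries roughly
    like:

        "--influxDatabase=beam_test_metrics"            (assumed option name)
        "--influxMeasurement=sql_bqio_read_java_batch"  (assumed option name)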

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 39.545 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/56gjcngjnnhd6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2614

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2614/display/redirect>

Changes:


------------------------------------------
[...truncated 339.42 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Nov 01, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Nov 01, 2021 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Nov 01, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Nov 01, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1785069675]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@899766210]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Nov 01, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Nov 01, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Nov 01, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4463813952263075272.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XTfUz0OREnp_VWMcN2KIpFSyqxsy5hvA7cBHm99jZYY.jar
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Nov 01, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104896 bytes, hash 5e12502ef02032396fe0b155b52fc5aedeb461e1439dd2a310450f405ca094a7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-XhJQLvAgMjlv4LFVtS_Frt60YeFDndKjEEUPQFyglKc.pb
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Nov 01, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_17_45_03-8308257496170417740?project=apache-beam-testing
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_17_45_03-8308257496170417740
    Nov 01, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_17_45_03-8308257496170417740
    Nov 01, 2021 12:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-11-01T00:45:09.581Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Nov 01, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:14.936Z: Worker configuration: e2-standard-2 in us-central1-a.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.536Z: Expanding CoGroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.576Z: Expanding GroupByKey operations into optimizable parts.
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.602Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.666Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.689Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:15.837Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:16.237Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:16.306Z: Starting 5 workers in us-central1-a...
    Nov 01, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:44.175Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Nov 01, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:45:56.838Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Nov 01, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:24.217Z: Workers have started successfully.
    Nov 01, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:24.254Z: Workers have started successfully.
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.765Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.894Z: Cleaning up.
    Nov 01, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:46:51.982Z: Stopping worker pool...
    Nov 01, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:49:19.885Z: Autoscaling: Resized worker pool from 5 to 0.
    Nov 01, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-11-01T00:49:19.917Z: Worker pool stopped.
    Nov 01, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_17_45_03-8308257496170417740 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 311aa12b-047d-427d-91be-61fb0480a630 and timestamp: 2021-11-01T00:49:26.477000000Z:
                     Metric:                    Value:
                   read_time                     7.593
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Nov 01, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 39.234 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6nry5axhv7mmm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2613

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2613/display/redirect>

Changes:


------------------------------------------
[...truncated 339.20 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
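
This IllegalStateException is Beam's standard coder-inference failure for a PCollection<Row>: a Row's coder cannot be inferred from its Java type, so the producing transform must attach a schema explicitly. A minimal sketch of the fix the message itself suggests, with a hypothetical RowMonitorFn standing in for the test's RowMonitor:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the projected columns; the field types here are illustrative.
    Schema schema = Schema.builder()
        .addNullableField("author", FieldType.STRING)
        .addNullableField("type", FieldType.STRING)
        .addNullableField("title", FieldType.STRING)
        .addNullableField("score", FieldType.INT64)
        .build();

    // `rows` is the upstream PCollection<Row>; RowMonitorFn is an illustrative DoFn<Row, Row>.
    PCollection<Row> monitored =
        rows.apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorFn()))
            .setRowSchema(schema);  // attaches a RowCoder, so downstream coder inference succeeds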

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5176460154054228143.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rjl6cpA24uQPJb6if8ugvHBLcTI9r8AgNOzQB8HWbz8.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c8cf0147efef9c70eb531945060f07d654c6d6c861ba2ed2f1ba862c610248af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yM8BR-_vnHDrUxlFBg8H1lTG1shhui7S8bqGLGECSK8.pb
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_11_45_04-9800440872836558494?project=apache-beam-testing
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_11_45_04-9800440872836558494
    Oct 31, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_11_45_04-9800440872836558494
    Oct 31, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T18:45:07.260Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.004Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.696Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.739Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.788Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.864Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.899Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:14.951Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:15.303Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:15.392Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:45.216Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:50.848Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:45:50.881Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 31, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:01.258Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:25.505Z: Workers have started successfully.
    Oct 31, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:25.536Z: Workers have started successfully.
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:56.747Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:56.918Z: Cleaning up.
    Oct 31, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:46:57.004Z: Stopping worker pool...
    Oct 31, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:49:22.070Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T18:49:22.110Z: Worker pool stopped.
    Oct 31, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_11_45_04-9800440872836558494 finished with status DONE.
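
Unlike the two failing variants, this run succeeds because the planner collapsed the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel: the projection (by, type, title, score) and the filter are evaluated inside the BigQuery Storage Read API rather than in a Beam ParDo. A rough hand-written equivalent of the read the optimizer produced (the table reference is illustrative, not the test's actual table; `p` is the pipeline):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> stories =
        p.apply(BigQueryIO.readTableRows()
            .from("my-project:my_dataset.hacker_news")   // illustrative table reference
            .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));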

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da9f84e8-ebd6-4528-8df7-82f24dc20c8d and timestamp: 2021-10-31T18:49:28.349000000Z:
                     Metric:                    Value:
                   read_time                    10.991
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
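
The missing-property warning affects only metrics publication, not the test verdict: InfluxDBPublisher skips the upload when no measurement/database is configured. In Beam's test utilities those values are normally supplied as pipeline options (likely --influxDatabase and --influxMeasurement; that is an assumption about this job's intended configuration, which here only sets the BigQuery metrics options).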

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
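The two schema-related failures (readUsingDirectReadMethod and readUsingDefaultMethod) occurred while the pipeline was still being constructed, before anything was submitted to Dataflow; only the push-down variant actually ran, finishing with status DONE and reporting the metrics above.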
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 41.251 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ivcdgxrmqzo6m

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2612

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2612/display/redirect>

Changes:


------------------------------------------
[...truncated 338.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 12:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4673906120842993167.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RvOhR2-u4RYJ9c_iDXufjodi9WDKeIxY1Wim3pRCkOw.jar
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 500a4585e3d19f01629912fd3c44e3d2cdba2f0fccda22019f18bb1dfd833d01> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UApFhePRnwFimRL9PETj0s26Lw_M2iIBnxi7Hf2DPQE.pb
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-31_05_45_05-5378782084699586586?project=apache-beam-testing
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-31_05_45_05-5378782084699586586
    Oct 31, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-31_05_45_05-5378782084699586586
    Oct 31, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T12:45:09.051Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:17.921Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.679Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.714Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.750Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.816Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:18.882Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:19.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:19.376Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:45:41.091Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:05.128Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:32.380Z: Workers have started successfully.
    Oct 31, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:46:32.460Z: Workers have started successfully.
    Oct 31, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.594Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.753Z: Cleaning up.
    Oct 31, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:47:00.828Z: Stopping worker pool...
    Oct 31, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:49:25.409Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T12:49:25.459Z: Worker pool stopped.
    Oct 31, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-31_05_45_05-5378782084699586586 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b81ad903-b309-4e5a-8a01-2dba8edd5019 and timestamp: 2021-10-31T12:49:31.966000000Z:
                     Metric:                    Value:
                   read_time                     7.043
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.234 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a74a3p4ahu22w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2611

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2611/display/redirect>

Changes:


------------------------------------------
[...truncated 338.35 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 31, 2021 6:45:28 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 6:45:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
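    The push-down above trims the read to the four used fields and ships the filter to BigQuery, so only matching rows cross the wire. For reference, here is a hedged sketch of the same SELECT/WHERE shape run through the public SqlTransform over an in-memory table (a single input PCollection is addressable as PCOLLECTION); it assumes the beam-sdks-java-extensions-sql dependency and illustrates only the query — field and filter push-down applies when the source is a BigQuery table in DIRECT_READ mode, as in this test, not to in-memory inputs.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushedDownQuerySketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> hackerNews =
            pipeline.apply(
                Create.of(
                        Row.withSchema(schema).addValues("a", "story", "kept", 3L).build(),
                        Row.withSchema(schema).addValues("b", "comment", "dropped", 9L).build())
                    .withCoder(RowCoder.of(schema)));

        // `by` is a reserved word in Calcite, hence the backticks, matching
        // the generated SQL in the log above.
        PCollection<Row> filtered =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }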
    Oct 31, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test362942635619535858.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YW66o3GoK0jV1NRr0ewvjrLL635jZgtq5bobD1Cf34k.jar
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash be6dec7821ddfc6faef0eb17c46931b0bad1900813e53c4ded742acb8af8de9c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vm3seCHd_G-u8OsXxGkxsLrRkAgT5TxN7XQqy4r43pw.pb
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 6:45:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_23_45_40-18233134485701076680?project=apache-beam-testing
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_23_45_40-18233134485701076680
    Oct 31, 2021 6:45:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_23_45_40-18233134485701076680
    Oct 31, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T06:45:43.861Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:50.568Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.566Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.787Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.860Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:51.969Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.011Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.034Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.356Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:45:52.443Z: Starting 5 workers in us-central1-a...
    Oct 31, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:46:17.265Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:46:41.557Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:08.562Z: Workers have started successfully.
    Oct 31, 2021 6:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:08.590Z: Workers have started successfully.
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.406Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.550Z: Cleaning up.
    Oct 31, 2021 6:47:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:47:36.629Z: Stopping worker pool...
    Oct 31, 2021 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:49:59.361Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 6:49:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T06:49:59.401Z: Worker pool stopped.
    Oct 31, 2021 6:50:04 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_23_45_40-18233134485701076680 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dba154f1-7f83-4552-b9e3-ca76d885074d and timestamp: 2021-10-31T06:50:04.828000000Z:
                     Metric:                    Value:
                   read_time                     9.319
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 6:50:05 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
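    The warning just above means the InfluxDB publisher skipped this run, so its metrics reached only the console and the BigQuery table named in the test options (--metricsBigQueryDataset/--metricsBigQueryTable). In Beam's test infrastructure the missing properties are normally supplied as pipeline options along the lines of --influxMeasurement, --influxDatabase and --influxHost; those option names are assumed here from Beam perf-test conventions rather than taken from this log.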

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.228 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dijcrzgt7t542

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2610

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2610/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15753 from [BEAM-13080] Add option in Reshuffle to


------------------------------------------
[...truncated 338.00 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 31, 2021 12:44:49 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
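    This deprecation warning is triggered by the job's own invocation: the -DbeamTestPipelineOptions array above passes the legacy --workerHarnessContainerImage= flag (with an empty value). Renaming it to --sdkContainerImage= with the same value would presumably satisfy the newer option and silence the warning; that rename is inferred from the warning text, not from a verified change to the job definition.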
    Oct 31, 2021 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 31, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 31, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@576744789]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@2102085350]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 31, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 31, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7047443652272399555.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZX3BGsszZ0VCEKSwZ20YewebUiQ2wnQsf7X1DaMRB6s.jar
    Oct 31, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 31, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash ecda6e7969e9eac8e48d10e668d86b9af9a19983c08518158a55ccf16f8addc5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7NpueWnp6sjkjRDmaNhrmvmhmYPAhRgVilXM8W-K3cU.pb
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 31, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_17_45_02-11466525833592608079?project=apache-beam-testing
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_17_45_02-11466525833592608079
    Oct 31, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_17_45_02-11466525833592608079
    Oct 31, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-31T00:45:05.959Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 31, 2021 12:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:12.551Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.375Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.405Z: Expanding GroupByKey operations into optimizable parts.
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.430Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.500Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.525Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.558Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.896Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:13.960Z: Starting 5 workers in us-central1-c...
    Oct 31, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:43.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 31, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:45:59.601Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 31, 2021 12:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:25.408Z: Workers have started successfully.
    Oct 31, 2021 12:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:25.434Z: Workers have started successfully.
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.474Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.620Z: Cleaning up.
    Oct 31, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:46:54.692Z: Stopping worker pool...
    Oct 31, 2021 12:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:49:16.425Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 31, 2021 12:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-31T00:49:16.460Z: Worker pool stopped.
    Oct 31, 2021 12:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_17_45_02-11466525833592608079 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1e6807df-df56-4817-a6fc-2b7a1b68e7cd and timestamp: 2021-10-31T00:49:23.187000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.629

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 31, 2021 12:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 37.261 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ibd2lmet5kuvk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2609

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2609/display/redirect>

Changes:


------------------------------------------
[...truncated 337.59 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 30, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1100293419112175705.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-JQ7X8EXJUUk8jpxivWfsvAJlU0-NApT423ykPVtZxqE.jar
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash be466095046eb815bf7c58ecc4ef48508672e585a9760786b13b09d30ae0ead9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vkZglQRuuBW_fFjsxO9IUIZy5YWpdgeGsTsJ0wrg6tk.pb
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_11_45_07-15044799619051388739?project=apache-beam-testing
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_11_45_07-15044799619051388739
    Oct 30, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_11_45_07-15044799619051388739
    Oct 30, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T18:45:10.669Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:16.441Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.274Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.306Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.340Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.400Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.431Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.463Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.758Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:17.828Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:45:25.712Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:11.138Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:36.079Z: Workers have started successfully.
    Oct 30, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:46:36.102Z: Workers have started successfully.
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.334Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.468Z: Cleaning up.
    Oct 30, 2021 6:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:47:04.536Z: Stopping worker pool...
    Oct 30, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:49:31.648Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T18:49:31.682Z: Worker pool stopped.
    Oct 30, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_11_45_07-15044799619051388739 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 57eae8b2-f3f5-46e7-a3f7-669d4e5b2806 and timestamp: 2021-10-30T18:49:37.802000000Z:
                     Metric:                    Value:
                   read_time                      9.13
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:49:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 49.153 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/fqn5rxbnyaawc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2608

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2608/display/redirect>

Changes:


------------------------------------------
[...truncated 338.23 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
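
The deprecation warning refers to the empty --workerHarnessContainerImage= entry in the beamTestPipelineOptions shown above; assuming a straight one-for-one rename (the warning implies it, but the surrounding job config is not shown here), the non-deprecated form would be:

    -DbeamTestPipelineOptions=[...,"--sdkContainerImage=",...]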
    Oct 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
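
The failure above is Beam's standard coder-inference error for untyped Row outputs: a PCollection<Row> needs either an explicit Coder via .setCoder(...) or, as the message suggests, a schema via PCollection.setRowSchema, which lets Beam derive a RowCoder. A minimal, self-contained sketch of that remedy follows; the schema and values are illustrative (modeled on the projected columns author, type, title, score), not taken from the failing test.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema mirroring the query's projected columns.
        final Schema schema =
            Schema.builder()
                .addStringField("author")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void processElement(
                              @Element String type, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(schema)
                                    .addValues("someone", type, "a title", 3)
                                    .build());
                          }
                        }))
                // Without this (or an explicit .setCoder(...)), getCoder()
                // fails with the IllegalStateException shown above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }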

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1156856349]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
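
For readers of the plan above: with push-down, the projection (usedFields) and the supported filter are handed to the BigQuery Storage Read API instead of being applied in a downstream BeamCalcRel. Hand-written with BigQueryIO, the equivalent read looks roughly like this sketch; the table spec is a placeholder, not the test's actual table.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        p.apply(
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // placeholder table spec
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Only the used fields are read from storage...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported predicate is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }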
    Oct 30, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3829206006658264483.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6JCsJTYYoMb46kZK-Rw_SMv7J5S5MuMc13Zhnf1aYeQ.jar
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b2881793868dbc923e9e4b311f91ae3bd6e53aba30bf7b5cfe0f440659ae0675> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sogXk4aNvJI-nksxH5GuO9blOrowv3tc_g9EBlmuBnU.pb
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-30_05_45_05-10547139370384166789?project=apache-beam-testing
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-30_05_45_05-10547139370384166789
    Oct 30, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-30_05_45_05-10547139370384166789
    Oct 30, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T12:45:08.910Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:17.709Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.427Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.467Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.485Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.560Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.594Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:18.626Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:19.183Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:19.243Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:45:46.302Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:04.449Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:31.360Z: Workers have started successfully.
    Oct 30, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:31.396Z: Workers have started successfully.
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:46:59.918Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:47:00.048Z: Cleaning up.
    Oct 30, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:47:00.110Z: Stopping worker pool...
    Oct 30, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:49:21.101Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 12:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T12:49:21.140Z: Worker pool stopped.
    Oct 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-30_05_45_05-10547139370384166789 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 14a22a52-17db-458e-9e7c-275dfcd9ff30 and timestamp: 2021-10-30T12:49:37.629000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.517

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 49.319 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2ckiu7r2m54sa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2607

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2607/display/redirect>

Changes:


------------------------------------------
[...truncated 338.82 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 30, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6467173426407546027.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ShAbNeJhTpKibmm9fROrOqlHV3JpbdGgdoJWtDcG7Gw.jar
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 38853d9b66e72dd46077ff922f2098d5e15f20bb8f99a9c0274acb779695fa5d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-OIU9m2bnLdRgd_-SLyCY1eFfILuPmanAJ0rLd5aV-l0.pb
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_23_45_02-11753447552064274059?project=apache-beam-testing
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_23_45_02-11753447552064274059
    Oct 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_23_45_02-11753447552064274059
    Oct 30, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T06:45:07.559Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:14.193Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:14.967Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.089Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.187Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.343Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.393Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:15.470Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:16.103Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:16.182Z: Starting 5 workers in us-central1-c...
    Oct 30, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:45:26.747Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:01.538Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:27.396Z: Workers have started successfully.
    Oct 30, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:27.422Z: Workers have started successfully.
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.073Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.246Z: Cleaning up.
    Oct 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:46:55.314Z: Stopping worker pool...
    Oct 30, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:49:22.849Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T06:49:22.896Z: Worker pool stopped.
    Oct 30, 2021 6:49:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_23_45_02-11753447552064274059 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b416a8d0-972f-4454-9902-35be5998682c and timestamp: 2021-10-30T06:49:28.795000000Z:
                     Metric:                    Value:
                   read_time                     7.355
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 6:49:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 42.314 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/zeqn3rei5rucg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2606

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2606/display/redirect?page=changes>

Changes:

[Luke Cwik] [BEAM-13015] Migrate bundle processing in the SDK harness to using

[Brian Hulette] [BEAM-13099] Copy vendored calcite build for 1.28.0

[Brian Hulette] [BEAM-13099] Modifications for vendor/calcite-1_28_0 build

[noreply] [BEAM-13001] DoFn metrics for Go SDK (#15657)


------------------------------------------
[...truncated 354.00 KB...]
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 14b8371ba6b0d29011e8f83c564ee45c
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 30, 2021 12:56:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 30, 2021 12:56:12 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 30, 2021 12:56:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 30, 2021 12:56:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@402527372]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 30, 2021 12:56:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 30, 2021 12:56:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 30, 2021 12:56:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
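
The BEAMPlan above is the point of this test: BeamPushDownIOSourceRel reads only usedFields=[by, type, title, score] and hands the supported filter to the BigQuery Storage Read API, instead of fetching whole rows and filtering them in a BeamCalcRel as the two failing tests do. A hedged sketch of the equivalent hand-written read (the table reference is hypothetical; the IT's actual source table does not appear in this log):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Roughly what the planner asks of the Storage Read API: project the
        // used fields and push the supported filter down as a row restriction.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // hypothetical table
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

withSelectedFields and withRowRestriction are the knobs the SQL planner populates from the projected columns and the pushed filter; both require Method.DIRECT_READ.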
    Oct 30, 2021 12:56:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-8lDYMA-T6bCO4br-rIbZCyKvMzLEqUzZXHR69IET0wc.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-0NLdms4PlDCbuhdFfquuZqvkjQupN7vd8rr0ZBuhhrw.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6662209333345085913.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2lclFvWKW20kc4X9PqTgETVIc3k02D3GcIqg8FYRrqQ.jar
    Oct 30, 2021 12:56:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 1 second
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 30, 2021 12:56:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1e76ec5cd0d64801c5c342a99566acffdacb17359bee1509da44e688fef19892> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HnbsXNDWSAHFw0KplWas_9rLFzWb7hUJ2kTmiP7xmJI.pb
    Oct 30, 2021 12:56:24 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_17_56_25-15640718372973521731?project=apache-beam-testing
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_17_56_25-15640718372973521731
    Oct 30, 2021 12:56:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_17_56_25-15640718372973521731
    Oct 30, 2021 12:56:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-30T00:56:28.760Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 30, 2021 12:56:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:35.857Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.638Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.676Z: Expanding GroupByKey operations into optimizable parts.
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.707Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.780Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.801Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:36.825Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:37.162Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:56:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:56:37.228Z: Starting 5 workers in us-central1-a...
    Oct 30, 2021 12:57:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:10.439Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 30, 2021 12:57:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:23.755Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 30, 2021 12:57:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:48.346Z: Workers have started successfully.
    Oct 30, 2021 12:57:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:57:48.374Z: Workers have started successfully.
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.559Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.678Z: Cleaning up.
    Oct 30, 2021 12:58:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T00:58:17.775Z: Stopping worker pool...
    Oct 30, 2021 1:00:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T01:00:38.095Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 30, 2021 1:00:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-30T01:00:38.133Z: Worker pool stopped.
    Oct 30, 2021 1:00:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_17_56_25-15640718372973521731 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ec4815a-68d4-43c8-bb52-919b2c5698ab and timestamp: 2021-10-30T01:00:43.691000000Z:
                     Metric:                    Value:
                   read_time                     7.487
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 30, 2021 1:00:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
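
The run itself succeeded but the metrics were not exported: InfluxDBPublisher skips publishing when no measurement/database is configured. A plausible remedy, assuming the usual Beam test-utils InfluxDB options (the flag names and database value below are an assumption, not taken from this log), is to extend the -DbeamTestPipelineOptions array shown at the top of the task output:

    -DbeamTestPipelineOptions=[...,"--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"]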

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 35.502 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 20s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/seguozwjd5mom

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2605

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2605/display/redirect?page=changes>

Changes:

[noreply] [BEAM-8958] Use AWS credentials provider with BasicKinesisProvider

[noreply] [BEAM-13099] Use BlockBuilder.add(..) rather than

[noreply] Minor: Remove broken python compatibility checks (#15828)

[noreply] [BEAM-12047] Updates CHANGES.md to mention the URN convention (#15845)


------------------------------------------
[...truncated 341.07 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ffef37f1f512e2ff72d7df90c2e756c9
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 6:46:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 6:46:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 6:46:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 6:46:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:46:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:46:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 6:46:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 6:46:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-6U1fnteJpM87FvukrS9vCPZ0ldEVRYGcZemwHs5aJZc.jar
    Oct 29, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6666071151916232178.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LrcfuP3Pt6SBQrKQaHskQX9GXsOz1O9UjxS4Uh2EroI.jar
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 6:46:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 3ae85454b2661e9f6de8ec7341e761635c1a0b1e3fdb9abb158611f9899e5dbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-OuhUVLJmHp9t6OxzQedhY1waCx4_25q7FYYR-YmeXbw.pb
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 6:46:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_11_46_16-1680074798003155344?project=apache-beam-testing
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_11_46_16-1680074798003155344
    Oct 29, 2021 6:46:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_11_46_16-1680074798003155344
    Oct 29, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T18:46:20.058Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 6:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:28.712Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.468Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.535Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.587Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.658Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.693Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:29.724Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:30.161Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:30.255Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:46:57.268Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:11.245Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:11.278Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 29, 2021 6:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:21.598Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:47:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:44.444Z: Workers have started successfully.
    Oct 29, 2021 6:47:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:47:44.478Z: Workers have started successfully.
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.383Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.771Z: Cleaning up.
    Oct 29, 2021 6:48:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:48:18.878Z: Stopping worker pool...
    Oct 29, 2021 6:50:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:50:43.818Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 6:50:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T18:50:43.864Z: Worker pool stopped.
    Oct 29, 2021 6:50:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_11_46_16-1680074798003155344 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c4a03450-614a-4f3f-b4f3-cc4d35f33a35 and timestamp: 2021-10-29T18:50:50.225000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.573

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:50:50 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 52.19 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 32s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/pfgi7knuulu7o

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2604

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2604/display/redirect>

Changes:


------------------------------------------
[...truncated 346.29 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 7b3e4c96a207d7f212569f92f13d6cf6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 12:45:22 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8305812740099050526.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6ycfLpUzFen-qI8LH97jsEq_U2djbFah2Y6k8ol5B94.jar
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c4471dd28bd9fe94805f77d2821c712c3f01b180973a7fbc171bb26736c9dad> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DERx3Si9n-lIBfd9KCHHEsPwGxgJc6f7wXG7JnNsna0.pb
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-29_05_45_37-10434285753432183194?project=apache-beam-testing
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-29_05_45_37-10434285753432183194
    Oct 29, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-29_05_45_37-10434285753432183194
    Oct 29, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T12:45:43.067Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:50.024Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.053Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.241Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.306Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.377Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.411Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.474Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.761Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:51.827Z: Starting 5 workers in us-central1-a...
    Oct 29, 2021 12:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:45:56.458Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:28.640Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:46:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:28.669Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 29, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:46:39.184Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:03.518Z: Workers have started successfully.
    Oct 29, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:03.556Z: Workers have started successfully.
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.184Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.329Z: Cleaning up.
    Oct 29, 2021 12:47:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:47:36.414Z: Stopping worker pool...
    Oct 29, 2021 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:50:10.021Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 12:50:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T12:50:10.259Z: Worker pool stopped.
    Oct 29, 2021 12:50:20 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-29_05_45_37-10434285753432183194 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4875e6fd-1916-4c38-b466-1ce69e34acbf and timestamp: 2021-10-29T12:50:20.168000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.647

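(Aside: the harness does not state the units of read_time; reading it as seconds of
measured read time -- an assumption -- this run works out to roughly
4,375,276 fields / 9.647 s, i.e. about 453,000 fields read per second.)
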
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:50:20 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

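On the "Missing property -- measurement/database" warning: InfluxDBPublisher skips
publishing entirely when no measurement and database are configured. A minimal sketch
of supplying them via Beam's testutils settings object -- the builder method names
below are assumed from Beam's testutils API and may differ between versions, and the
host/database/measurement values are purely illustrative, not the harness's actual
wiring:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Illustrative values; none of these are taken from the failing run.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_test_metrics")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
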
Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 2.569 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 59s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/wryxoerncpd5k

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2603

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2603/display/redirect?page=changes>

Changes:

[artur.khanin] Implemented GetCompileOutput and covered with unit tests

[artur.khanin] Replaced logging library regarding PR comment

[aydar.zaynutdinov] [BEAM-13120][Playground]

[Kyle Weaver] [BEAM-13143] Fix python doc generator error.

[Pablo Estrada] Handle runner-provided shards for TextIO

[noreply] Merge pull request #15721 from [BEAM-13023][Playground] Implement Redis

[noreply] Merge pull request #15802 from [BEAM-12970][Playground] Implement gRPC

[noreply] Merge pull request #15800 from [BEAM-12970][Playground] Implement gRPC


------------------------------------------
[...truncated 342.92 KB...]
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1812577242]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

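This failure (and the identical one on readUsingDefaultMethod below) comes down to a
PCollection<Row> produced without an attached schema, so no RowCoder can be inferred.
A minimal self-contained sketch of the fix the error message suggests -- the schema
and sample row below are hypothetical, chosen to mirror the query's columns, not
taken from the test code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Hypothetical schema mirroring the selected columns.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows =
            pipeline
                .apply(Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build()))
                // Without setRowSchema (or an explicit setCoder(RowCoder.of(schema))),
                // coder inference fails with the IllegalStateException seen above.
                .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }
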
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@882830612]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

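For completeness, the planned query itself can be run against any schema-aware
PCollection with Beam SQL's SqlTransform; a short sketch, where 'input' is assumed to
be a PCollection<Row> with by/type/title/score fields (e.g. built as in the sketch
above):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // PCOLLECTION is SqlTransform's name for the single default input.
    PCollection<Row> filtered = input.apply(
        SqlTransform.query(
            "SELECT `by` AS author, type, title, score "
                + "FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
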
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3840560413878150944.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-s_btb8uGFWCFZHueULyLYDbqf-rvKa2-1TgwWnICN5Y.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Oct 29, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 1 seconds
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 63d9ecd05bfe9ee7607f41942fcb076ad0ebc1f570b6b24c7e733de0a250b6e7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Y9ns0Fv-nudgf0GUL8sHatDrwfVwtrJMfnM94KJQtuc.pb
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_23_45_16-11740413233974374282?project=apache-beam-testing
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_23_45_16-11740413233974374282
    Oct 29, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_23_45_16-11740413233974374282
    Oct 29, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T06:45:20.779Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:37.252Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:38.673Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.162Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 6:45:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.293Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.692Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:39.875Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.052Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.792Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:40.874Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:45:51.785Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:26.574Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:52.352Z: Workers have started successfully.
    Oct 29, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:46:52.383Z: Workers have started successfully.
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.380Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.541Z: Cleaning up.
    Oct 29, 2021 6:47:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:47:22.618Z: Stopping worker pool...
    Oct 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:49:46.756Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T06:49:46.807Z: Worker pool stopped.
    Oct 29, 2021 6:49:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_23_45_16-11740413233974374282 finished with status DONE.

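The BeamPushDownIOSourceRel plan and the "Pushing down the following filter" line
earlier in this run map one-to-one onto BigQueryIO's Storage Read API knobs: the
usedFields list becomes a field projection and the filter becomes a row restriction.
A hand-written sketch of the equivalent read -- 'pipeline' is assumed in scope, and
the table reference is an illustrative public dataset, not the test's HACKER_NEWS
table:

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    // Project only the needed columns and let the Storage Read API apply the
    // filter server-side, as the SQL push-down does automatically.
    PCollection<TableRow> rows = pipeline.apply(
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")   // illustrative table
            .withMethod(Method.DIRECT_READ)                  // BigQuery Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
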
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2ffa4a8e-be28-4f6b-b389-1955df1976d9 and timestamp: 2021-10-29T06:49:59.160000000Z:
                     Metric:                    Value:
                   read_time                     9.594
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 6:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 5 mins 2.578 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 39s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4lrmg76nctdg6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2602

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2602/display/redirect>

Changes:


------------------------------------------
[...truncated 342.77 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cdb611d8bfd52e82910a87e269abc932
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

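The -DbeamTestPipelineOptions JSON on the command line above is just a serialized
list of standard pipeline options; parsed in plain Java they would look roughly like
this sketch of the equivalent (not the harness's actual wiring):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    DataflowPipelineOptions options =
        PipelineOptionsFactory.fromArgs(
                "--runner=DataflowRunner",
                "--project=apache-beam-testing",
                "--region=us-central1",
                "--tempLocation=gs://temp-storage-for-perf-tests/loadtests",
                "--numWorkers=5",
                "--maxNumWorkers=5",
                "--autoscalingAlgorithm=NONE")
            .withValidation()
            .as(DataflowPipelineOptions.class);
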
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 29, 2021 12:46:47 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 29, 2021 12:46:49 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 29, 2021 12:46:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 29, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1363274082]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@758955551]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 29, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 29, 2021 12:47:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 29, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 29, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 29, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6331128248829274349.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mtI5owoS0u5ESDk8500ymZ6nvb70IsnApMyjSVZrz8c.jar
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 3 seconds
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 29, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash 323429202f7c910a9d7698eb73776a2fd060bac8894c53f837a6f3907e1f4920> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-MjQpIC98kQqddpjrc3dqL9BgusiJTFP4N6bzkH4fSSA.pb
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 29, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_17_47_17-16482579381554673905?project=apache-beam-testing
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_17_47_17-16482579381554673905
    Oct 29, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_17_47_17-16482579381554673905
    Oct 29, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-29T00:47:20.302Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 29, 2021 12:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:32.279Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.012Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.044Z: Expanding GroupByKey operations into optimizable parts.
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.078Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.163Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.196Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.229Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:33.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:47:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:34.080Z: Starting 5 workers in us-central1-c...
    Oct 29, 2021 12:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:47:35.347Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 29, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:13.283Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 29, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:38.878Z: Workers have started successfully.
    Oct 29, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:48:38.913Z: Workers have started successfully.
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:08.159Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:09.361Z: Cleaning up.
    Oct 29, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:49:09.594Z: Stopping worker pool...
    Oct 29, 2021 12:51:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:51:33.932Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 29, 2021 12:51:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-29T00:51:33.968Z: Worker pool stopped.
    Oct 29, 2021 12:51:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_17_47_17-16482579381554673905 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7f8ff1b9-0911-41b7-af48-5b0f4c406a67 and timestamp: 2021-10-29T00:51:42.150000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.203

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 29, 2021 12:51:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 2.872 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 14s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/d2xgv37gtklc4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2601

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2601/display/redirect?page=changes>

Changes:

[daria.malkova] fix executor tests

[mmack] [BEAM-13138] Update code to retrieve container endpoint

[noreply] Merge pull request #15709 from [BEAM-12967] [Playground] Create Example

[noreply] Merge pull request #15294 from [BEAM-11986] Spanner write metric

[noreply] [BEAM-13130] Remove persistent references to stateKeyReaders (#15815)

[Brian Hulette] [BEAM-13099] Replace call to RelNode.metadata with BeamRelMetadataQuery

[Brian Hulette] [BEAM-13099] Update all rels to use BeamRelMetadataQuery


------------------------------------------
[...truncated 341.88 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is cdb611d8bfd52e82910a87e269abc932
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
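
For context, the JSON array passed via -DbeamTestPipelineOptions on the command
line above is what the forked test JVM reads back to configure its pipeline. A
minimal sketch, assuming Beam's TestPipeline utilities (illustrative, not code
from this job):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    /** Parses the beamTestPipelineOptions system property set above. */
    static PipelineOptions readTestOptions() {
      return TestPipeline.testingPipelineOptions();
    }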

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 6:46:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 6:46:15 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 6:46:15 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 6:46:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1850632974]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
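
    The root cause is the construction-time error in the message above: the
    output of ParDo(RowMonitor) is a PCollection of Beam Rows, and Row has no
    default coder unless a schema is attached. A minimal sketch of the fix the
    message itself suggests, using a hypothetical helper (the test's actual
    wiring lives in BigQueryIOPushDownIT; the field names and types mirror the
    projected columns and may not match the real HACKER_NEWS schema):

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        /** Attaches an explicit row schema so a RowCoder can be inferred. */
        static PCollection<Row> withHackerNewsSchema(PCollection<Row> rows) {
          Schema schema =
              Schema.builder()
                  .addNullableField("author", Schema.FieldType.STRING)
                  .addNullableField("type", Schema.FieldType.STRING)
                  .addNullableField("title", Schema.FieldType.STRING)
                  .addNullableField("score", Schema.FieldType.INT64)
                  .build();
          return rows.setRowSchema(schema);
        }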

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@785615728]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:80)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:41)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 6:46:21 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
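
    The two INFO lines above are the observable effect of push-down: only the
    four projected fields are requested, and the filter is evaluated by the
    BigQuery Storage API instead of a downstream BeamCalcRel. Hand-written, an
    equivalent read looks roughly like this sketch (the table reference is a
    placeholder, not taken from this job):

        import com.google.api.services.bigquery.model.TableRow;
        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

        static TypedRead<TableRow> pushedDownRead() {
          return BigQueryIO.readTableRows()
              .from("project:dataset.HACKER_NEWS")   // placeholder table
              .withMethod(TypedRead.Method.DIRECT_READ)
              // Projection push-down: only these columns are read from storage.
              .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
              // Predicate push-down: the filter runs server-side.
              .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
        }
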
    Oct 28, 2021 6:46:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-pyg7GrYof-707XaIUnVsamEoMf000KZSZ062WgHP3oI.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2875733142419718918.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DDEGeEq_m0K79oEeP8hb4dezS_wGXS479xC3KV7qjsM.jar
    Oct 28, 2021 6:46:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests--bul8qLquE7A_FJqWm2fDi-wEYmamMPx4IVgBJLFIJc.jar
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 56703fd3425e8fea04ff5229d403e025d7adb6f9b8e78c8cd6a4727593fde6d2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VnA_00Jej-oE_1Ip1APgJdettvm454yM1qRydZP95tI.pb
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 6:46:27 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_11_46_28-11445409060149932431?project=apache-beam-testing
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_11_46_28-11445409060149932431
    Oct 28, 2021 6:46:28 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_11_46_28-11445409060149932431
    Oct 28, 2021 6:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T18:46:31.079Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 6:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:39.756Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.608Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.649Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.679Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.802Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.854Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:40.897Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:41.287Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:41.373Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:46:56.442Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 6:47:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:27.321Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 6:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:52.749Z: Workers have started successfully.
    Oct 28, 2021 6:47:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:47:52.786Z: Workers have started successfully.
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.630Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.773Z: Cleaning up.
    Oct 28, 2021 6:48:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:48:23.869Z: Stopping worker pool...
    Oct 28, 2021 6:50:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:50:48.807Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 6:50:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T18:50:48.844Z: Worker pool stopped.
    Oct 28, 2021 6:50:53 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_11_46_28-11445409060149932431 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bfc0952f-8f56-47a5-922c-3994641caac2 and timestamp: 2021-10-28T18:50:54.023000000Z:
                     Metric:                    Value:
                   read_time                     9.587
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:50:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.738 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 32s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/dmwrb4pxpg6ym

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2600

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2600/display/redirect>

Changes:


------------------------------------------
[...truncated 337.07 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 67 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 67'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 67'
Successfully started process 'Gradle Test Executor 67'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 12:44:37 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 12:44:38 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 12:44:38 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:41 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 12:44:44 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 12:44:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7560152266624824864.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GwAWYyY-angsnSTYdL76SchfdCUGrTr78TophpKj5_4.jar
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 12:44:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 09553327c88a9b4bf549a3cddf4c47e2df57086a2294168f1542b05da6b0eb6a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-CVUzJ8iKm0v1SaPN30xH4t9XCGoilBaPFUKwXaaw62o.pb
    Oct 28, 2021 12:44:49 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-28_05_44_50-18135648208084954008?project=apache-beam-testing
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-28_05_44_50-18135648208084954008
    Oct 28, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-28_05_44_50-18135648208084954008
    Oct 28, 2021 12:44:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T12:44:53.562Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:02.666Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.451Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.534Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.587Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.735Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.787Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:03.832Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:04.370Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:04.460Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:30.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:45:49.666Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:16.361Z: Workers have started successfully.
    Oct 28, 2021 12:46:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:16.403Z: Workers have started successfully.
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:49.832Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:50.049Z: Cleaning up.
    Oct 28, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:46:50.198Z: Stopping worker pool...
    Oct 28, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:49:09.690Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 12:49:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T12:49:09.744Z: Worker pool stopped.
    Oct 28, 2021 12:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-28_05_44_50-18135648208084954008 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 17a4f0a8-d078-4155-87d8-c28eecc63f23 and timestamp: 2021-10-28T12:49:17.227000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    12.409

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 67 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 44.642 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/bh6g37te72cvg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2599

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2599/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-12970][Playground]

[aydar.zaynutdinov] [BEAM-12970][Playground]

[eugene.nikolayev] Fix Python docs build pre-commit failures.

[aydar.zaynutdinov] [BEAM-12970][Playground]

[aydar.zaynutdinov] [BEAM-12970][Playground]


------------------------------------------
[...truncated 339.83 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 28, 2021 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
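The BEAMPlan above shows what push-down buys: instead of a Calc on top of a full scan, the projection (usedFields) and the filter travel into the source itself. At the BigQueryIO level this corresponds to selected fields plus a row restriction on the Storage Read API; a sketch of that equivalence (the table reference is illustrative, not the test's wiring):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownSketch {
      static PCollection<TableRow> read(Pipeline p) {
        return p.apply(
            BigQueryIO.readTableRows()
                // Illustrative table reference.
                .from("bigquery-public-data:hacker_news.full")
                // DIRECT_READ = BigQuery Storage Read API; this is what makes
                // projection and filter push-down possible.
                .withMethod(TypedRead.Method.DIRECT_READ)
                // usedFields from the BEAMPlan above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // The filter reported as pushed down in the log line above.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }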
    Oct 28, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4787234441660124364.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-DaFQrDhZSQVckGS-jAI6xlIbcKHYPsDJE7j-bgZxato.jar
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9b09f064391c9933764fa0a140aafa200ebaeabd273f4e84f31ae1b3fd0033e8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-mwnwZDkcmTN2T6ChQKr6IA666r0nP06E8xrhs_0AM-g.pb
    Oct 28, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_23_45_26-12003482978697856457?project=apache-beam-testing
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_23_45_26-12003482978697856457
    Oct 28, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_23_45_26-12003482978697856457
    Oct 28, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T06:45:29.481Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:34.627Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.417Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.454Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.483Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.535Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.567Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.603Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:35.960Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:45:36.067Z: Starting 5 workers in us-central1-a...
    Oct 28, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:07.499Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:22.238Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:46.952Z: Workers have started successfully.
    Oct 28, 2021 6:46:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:46:46.985Z: Workers have started successfully.
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.418Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.559Z: Cleaning up.
    Oct 28, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:47:14.627Z: Stopping worker pool...
    Oct 28, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:49:46.788Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T06:49:46.855Z: Worker pool stopped.
    Oct 28, 2021 6:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_23_45_26-12003482978697856457 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d748ecc5-b726-4239-a0e9-9d31524f5959 and timestamp: 2021-10-28T06:49:53.359000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      8.46

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 6:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
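This warning affects only metrics publication, not the pass/fail result: the InfluxDB publisher was given no database/measurement, so the numbers above are recorded in BigQuery only. If publication were wanted, the usual remedy in Beam's test infrastructure is to pass the InfluxDB pipeline options alongside the BigQuery ones; the option names below are an assumption carried over from Beam's load-test jobs, not verified for this job:

    "--influxHost=http://localhost:8086",
    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch"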

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 45.485 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/35hc56imup2ns

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2598

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2598/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13066, BEAM-13087] Workaround for coder type combiner packing

[noreply] [BEAM-13066] Disable using abstract iterable by default. (#15805)


------------------------------------------
[...truncated 340.58 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 28, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
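Per the warning's own suggestion, the job still passes the deprecated flag. The fix would be a one-token change in the -DbeamTestPipelineOptions array shown above, keeping the (empty) value as-is: replace

    "--workerHarnessContainerImage="

with

    "--sdkContainerImage="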
    Oct 28, 2021 12:45:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 28, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 28, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 28, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 28, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 28, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 28, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5598585220275370564.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BZ6VU6Xzg0o23EE2nYrr8Xbo_FKn6WYjIdMhpPUuLwA.jar
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 28, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 5984e1dd14a50e50345099a88b05ed4873b34fc45b5756a7c653e7aa645b4852> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-WYTh3RSlDlA0UJmoiwXtSHOzT8RbV1anxlPnqmRbSFI.pb
    Oct 28, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_17_45_31-14969144643146501529?project=apache-beam-testing
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_17_45_31-14969144643146501529
    Oct 28, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_17_45_31-14969144643146501529
    Oct 28, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-28T00:45:34.473Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 28, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:42.543Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.345Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.410Z: Expanding GroupByKey operations into optimizable parts.
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.443Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.529Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.571Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:43.606Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:44.109Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:45:44.191Z: Starting 5 workers in us-central1-c...
    Oct 28, 2021 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:08.576Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 28, 2021 12:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:23.702Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 28, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:49.122Z: Workers have started successfully.
    Oct 28, 2021 12:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:46:49.158Z: Workers have started successfully.
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.184Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.353Z: Cleaning up.
    Oct 28, 2021 12:47:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:47:17.474Z: Stopping worker pool...
    Oct 28, 2021 12:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:49:40.020Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 28, 2021 12:49:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-28T00:49:40.089Z: Worker pool stopped.
    Oct 28, 2021 12:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_17_45_31-14969144643146501529 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e909d565-a8ed-4b80-bc76-e9c3807411b7 and timestamp: 2021-10-28T00:49:46.751000000Z:
                     Metric:                    Value:
                   read_time                     6.924
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 28, 2021 12:49:47 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 34.207 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/iantowjpkvvx6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2597

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2597/display/redirect?page=changes>

Changes:

[relax] support json

[Andrew Pilloud] [BEAM-13118] Don't force push to master

[noreply] Merge pull request #15731: [BEAM-13067]  Mark GroupIntoBatches output as


------------------------------------------
[...truncated 340.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 5eb7566f872e9ad0db84f74cdbe95784
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 6:54:41 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 6:54:42 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 6:54:42 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 6:54:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 27, 2021 6:54:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 6:54:52 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-0fzSn28JJsTbn80D6yUnZdKWbtpLzvqtEb4gXtufqjc.jar
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8356130489921905775.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EvmbEY4m5Fwb6WA3h20VrKyaHbAJF8O4T28n7DJxMc4.jar
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 6:54:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2951c5cbb7d2f241b4cd8720b8b6b6894f53b72c98e72adab7bf82cca305472b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KVHFy7fS8kG0zYcguLa2iU9TtyyY5yrat7-CzKMFRys.pb
    Oct 27, 2021 6:54:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 6:54:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_11_54_56-253676225406959320?project=apache-beam-testing
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_11_54_56-253676225406959320
    Oct 27, 2021 6:54:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_11_54_56-253676225406959320
    Oct 27, 2021 6:55:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T18:54:59.490Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:05.216Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.046Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.074Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.098Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.145Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.164Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.189Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.439Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:55:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:06.505Z: Starting 5 workers in us-central1-c...
    Oct 27, 2021 6:55:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:14.765Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 6:55:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:55:48.336Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 6:56:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:14.619Z: Workers have started successfully.
    Oct 27, 2021 6:56:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:14.648Z: Workers have started successfully.
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.564Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.701Z: Cleaning up.
    Oct 27, 2021 6:56:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:56:44.755Z: Stopping worker pool...
    Oct 27, 2021 6:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:59:11.801Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 6:59:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T18:59:11.836Z: Worker pool stopped.
    Oct 27, 2021 6:59:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_11_54_56-253676225406959320 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0acc9e99-6318-4ba7-99bc-9f208f29a8b7 and timestamp: 2021-10-27T18:59:17.677000000Z:
                     Metric:                    Value:
                   read_time                     8.821
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:59:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

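(The missing-property warning is incidental: when no InfluxDB measurement/database is supplied, the publisher skips the export and the run continues, so it plays no part in the test failures in these builds.)
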
Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
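(The two failing cases are readUsingDirectReadMethod and readUsingDefaultMethod; their stack traces fall in the truncated part of this log but appear verbatim in the runs below, while the push-down case above passed.)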
Finished generating test XML results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 40.766 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/boio2beypbzki

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2596

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2596/display/redirect?page=changes>

Changes:

[mmack] [BEAM-8958] Use AWS credentials provider with BasicKinesisProvider (AWS


------------------------------------------
[...truncated 338.19 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6bf5391d49e2e1a5a9437f0674ac4d43
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
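
(As this warning notes, the test invocation still passes the deprecated --workerHarnessContainerImage= flag, visible in the Gradle command line above; newer SDKs prefer --sdkContainerImage, and the empty value presumably just leaves the default worker image in place.)
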
    Oct 27, 2021 12:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
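
For readers unfamiliar with the Beam SQL entry point being exercised: outside this IT harness, a query of the same shape can be run against any schema-aware PCollection via SqlTransform. A minimal, self-contained sketch follows; the schema and sample row are illustrative stand-ins, not the IT's actual HACKER_NEWS table.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative subset of the columns the query touches.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();

        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema).addValues("alice", "story", "Hi", 5L).build()))
                .setRowSchema(schema);

        // Register the input under the table name the query references.
        PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), rows)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
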
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

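Both failures in this run are the same construction-time error: ParDo(RowMonitor) emits Beam Row values, Row has no default coder, and pipeline specification fails before anything is submitted to Dataflow. The message itself names the fix (PCollection.setRowSchema). A minimal sketch of that pattern, using a stand-in pass-through DoFn rather than the IT's actual RowMonitor:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFixSketch {
      // Stand-in for the IT's RowMonitor: passes each Row through unchanged.
      static class PassThroughFn extends DoFn<Row, Row> {
        @ProcessElement
        public void process(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder().addStringField("type").addInt64Field("score").build();

        PCollection<Row> rows =
            p.apply(Create.of(Row.withSchema(schema).addValues("story", 3L).build()))
                .setRowSchema(schema);

        // Without this setRowSchema call on the ParDo output, coder inference
        // fails at pipeline construction exactly as in the stack trace above.
        rows.apply(ParDo.of(new PassThroughFn())).setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }
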
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
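
This is the push-down at work: the SQL layer hands BigQuery both the projected columns (the usedFields above) and the filter, so only matching rows and columns leave the Storage Read API. Written by hand against BigQueryIO, the equivalent direct read looks roughly like the sketch below; the table reference is a placeholder, not the IT's real table.

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            "Read BQ rows with push-down",
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.hacker_news") // placeholder table
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only these columns leave BigQuery.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter runs inside the Storage Read API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
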
    Oct 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6083070067652228216.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G-WChelNsnAhHz9nSpygxHAtd3jeqfJnFy1YNnNXPKA.jar
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 19f61bcb318900ae56e6668214b12d9848b83c7bd49402b1df20ab01e7f363cd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GfYbyzGJAK5W5maCFLEtmEi4PHvUlAKx3yCrAefzY80.pb
    Oct 27, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-27_05_45_14-15978571298395169035?project=apache-beam-testing
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-27_05_45_14-15978571298395169035
    Oct 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-27_05_45_14-15978571298395169035
    Oct 27, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T12:45:17.543Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:22.980Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.858Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.906Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:23.943Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.030Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.065Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.113Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.507Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:24.586Z: Starting 5 workers in us-central1-a...
    Oct 27, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:45:38.918Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:20.845Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:47.882Z: Workers have started successfully.
    Oct 27, 2021 12:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:46:47.923Z: Workers have started successfully.
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.089Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.206Z: Cleaning up.
    Oct 27, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:47:16.294Z: Stopping worker pool...
    Oct 27, 2021 12:49:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:49:38.626Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 12:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T12:49:38.674Z: Worker pool stopped.
    Oct 27, 2021 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-27_05_45_14-15978571298395169035 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 60c7d0f0-d613-44b6-98f2-970d0e76913d and timestamp: 2021-10-27T12:49:47.848000000Z:
                     Metric:                    Value:
                   read_time                     9.143
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 52.334 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/flyqjbe5yx4qs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2595

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2595/display/redirect?page=changes>

Changes:

[heejong] [BEAM-12978] Customizable dependency for Java external transform

[heejong] add ConfigT param to getDependencies method


------------------------------------------
[...truncated 352.20 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 5 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6bf5391d49e2e1a5a9437f0674ac4d43
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 6:47:20 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 6:47:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 6:47:22 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 6:47:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 6:47:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 6:47:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 6:47:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 27, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1125005737521853143.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6CejtyXlnz_x_I5eE5ffhxfTYe5KlPH8WMTSxJ6h4xk.jar
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ddb448d9b010f9b2534dcd51002aa98e499f9bd6c6f090c1892db5aa26425451> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3bRI2bAQ-bJTTc1RACqpjkmfm9bG8JDBiS21qiZCVFE.pb
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 6:47:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_23_47_35-6282463995481441555?project=apache-beam-testing
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_23_47_35-6282463995481441555
    Oct 27, 2021 6:47:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_23_47_35-6282463995481441555
    Oct 27, 2021 6:47:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T06:47:38.747Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.067Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.746Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.776Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.807Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.881Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.905Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:44.937Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 6:47:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:45.259Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:47:45.336Z: Starting 5 workers in us-central1-a...
    Oct 27, 2021 6:48:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:04.879Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 6:48:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:28.879Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 6:48:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:57.013Z: Workers have started successfully.
    Oct 27, 2021 6:48:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:48:57.046Z: Workers have started successfully.
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.197Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.352Z: Cleaning up.
    Oct 27, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:49:23.427Z: Stopping worker pool...
    Oct 27, 2021 6:51:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:51:53.315Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 6:51:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T06:51:53.351Z: Worker pool stopped.
    Oct 27, 2021 6:51:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_23_47_35-6282463995481441555 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9f474d3d-6267-4c5e-a9bf-cf1cdcbb4e41 and timestamp: 2021-10-27T06:51:58.748000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.579

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 6:51:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.049 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.557 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 29s
152 actionable tasks: 105 executed, 47 from cache

Publishing build scan...
https://gradle.com/s/uuldaalp4e464

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2594

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2594/display/redirect?page=changes>

Changes:

[melissapa] [BEAM-11758] Update basics page: Splittable DoFn

[melissapa] Address review feedback

[noreply] [BEAM-4149] Ensure that we always provide and require the worker id.

[noreply] [BEAM-13098] Fix translation of repeated TableRow fields (#15779)


------------------------------------------
[...truncated 340.56 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 509abcf44d00236d233812d1ca309722
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 27, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 27, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
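
For context, the IllegalStateException above is Beam's generic error for a PCollection<Row> that reaches pipeline construction with no schema attached. A minimal sketch of the usual remedy, assuming a schema that mirrors the four projected columns in the SQL above (field names and types are inferred from the query, not taken from the test source; the helper name is hypothetical):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Hypothetical helper: attach a schema so the CoderRegistry can infer a RowCoder.
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING) // `by` AS author
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)   // type assumed
                .build();
        // The alternative named in the error text would be:
        // rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(schema));
        return rows.setRowSchema(schema);
      }
    }

The readUsingDefaultMethod failure below is the same error at a different call site; the push-down variant passes, which suggests the schema is only lost on the BeamCalcRel path.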

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 27, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 27, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 27, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
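
For reference, the BEAMPlan above is what Beam SQL produces when both the projection (usedFields) and the filter are pushed into the BigQuery Storage API read. A rough sketch of a pipeline that exercises this path, assuming HACKER_NEWS has already been registered with a BigQuery table provider using method DIRECT_READ (names are illustrative; this is not the test source):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // With a DIRECT_READ table, the planner rewrites this query into the
        // BeamPushDownIOSourceRel above: only the four used fields are read, and
        // the WHERE clause is evaluated by the storage read rather than a BeamCalcRel.
        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

The read_time and fields_read metrics reported at the end of this test appear to come from the ParDo(RowMonitor) and ParDo(TimeMonitor) steps added around this read (steps s2 and s3 below).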
    Oct 27, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-Q7KcmHeN9oWjmrz9t6eemA5BTEfZSF7ff75URlOqem8.jar
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6688066615625574093.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-8dZw05SraGBXmGFC-CUKDNtEF8079vcIZPpOvd2_RnA.jar
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 27, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 27, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 16dfe6f87483c41b4cf30f2a0e14ee270f5ac593ebf24499753c8b780b7a077a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ft_m-HSDxBtM8w8qDhTuJw9axZPr8kSZdTyLeAt6B3o.pb
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 27, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_17_45_20-11289818588678210841?project=apache-beam-testing
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_17_45_20-11289818588678210841
    Oct 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_17_45_20-11289818588678210841
    Oct 27, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-27T00:45:23.701Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:31.587Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.486Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.571Z: Expanding GroupByKey operations into optimizable parts.
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.597Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.677Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.730Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:32.766Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 27, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:33.312Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:33.405Z: Starting 5 workers in us-central1-c...
    Oct 27, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:45:38.847Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 27, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:14.152Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 27, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:40.065Z: Workers have started successfully.
    Oct 27, 2021 12:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:46:40.131Z: Workers have started successfully.
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.223Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.386Z: Cleaning up.
    Oct 27, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:47:07.473Z: Stopping worker pool...
    Oct 27, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:49:25.623Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 27, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-27T00:49:25.671Z: Worker pool stopped.
    Oct 27, 2021 12:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_17_45_20-11289818588678210841 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 49065fb8-a289-4515-9290-47942996a32e and timestamp: 2021-10-27T00:49:33.068000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.649

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 27, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 29.193 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/wivncqkpu6f7y

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2593

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2593/display/redirect?page=changes>

Changes:

[baeminbo] [BEAM-6721] Set numShards dynamically for TextIO.write()

[Alexey Romanenko] [BEAM-13104] ParquetIO: SplitReadFn must read the whole block

[noreply] Change sql.Options to an interface under sqlx. (#15790)


------------------------------------------
[...truncated 348.98 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8a5effdddaa8746e2d7f7e13a45fb337
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 6:50:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 6:50:14 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 6:50:16 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 6:50:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:32 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:42 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1970546799]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:46 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@181648722]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:50:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:50:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 6:50:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 6:50:50 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 6:51:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 6:51:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 6:51:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2703572330249305303.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-d8IvSdqPlnGwG9XFFuETwgT-Iabdz_B4fI2mNGqeffc.jar
    Oct 26, 2021 6:51:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 5 seconds
    Oct 26, 2021 6:51:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 6:51:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash aa588c8de4c9395979f2e52247d03243ab91495f1ef528cb875824f3e1e337d4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qliMjeTJOVl58uUiR9AyQ6uRSV8e9SjLh1gk8-HjN9Q.pb
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 6:51:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 6:51:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_11_51_16-15819183002633272158?project=apache-beam-testing
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_11_51_16-15819183002633272158
    Oct 26, 2021 6:51:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_11_51_16-15819183002633272158
    Oct 26, 2021 6:51:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T18:51:19.876Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:25.886Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.739Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.777Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.806Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.872Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.906Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:26.929Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:27.302Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:51:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:27.385Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 6:52:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:51:59.019Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:13.223Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:52:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:13.260Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 26, 2021 6:52:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:23.640Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:41.480Z: Workers have started successfully.
    Oct 26, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:52:41.507Z: Workers have started successfully.
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.612Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.752Z: Cleaning up.
    Oct 26, 2021 6:53:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:53:11.821Z: Stopping worker pool...
    Oct 26, 2021 6:55:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:55:28.791Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 6:55:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T18:55:28.830Z: Worker pool stopped.
    Oct 26, 2021 6:55:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_11_51_16-15819183002633272158 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6128655d-a700-4ef8-87d6-ed21a7acecbd and timestamp: 2021-10-26T18:55:37.598000000Z:
                     Metric:                    Value:
                   read_time                     9.969
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:55:38 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.042 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 50.177 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/lvzmfvumwusb6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2592

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2592/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15744 from [BEAM-13072][Playground] Executor builder


------------------------------------------
[...truncated 337.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6050859197808693486.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZCR6r3qW-kGRdDEzDhelvP8fTXza1crsnuKBelhYVF4.jar
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 21f983668a6ff0d4dcf11159d9899512b6c9e357c3a7c7474d0cf207cb679cd3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IfmDZopv8NTc8RFZ2YmVErbJ41fDp8dHTQzyB8tnnNM.pb
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-26_05_45_04-12110504230520579641?project=apache-beam-testing
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-26_05_45_04-12110504230520579641
    Oct 26, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-26_05_45_04-12110504230520579641
    Oct 26, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T12:45:08.122Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:21.224Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.028Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.063Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.102Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.181Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.211Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.243Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.559Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:22.656Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:45:43.306Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:11.749Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:36.527Z: Workers have started successfully.
    Oct 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:46:36.558Z: Workers have started successfully.
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:03.924Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:04.074Z: Cleaning up.
    Oct 26, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:47:04.153Z: Stopping worker pool...
    Oct 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:49:29.906Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T12:49:29.944Z: Worker pool stopped.
    Oct 26, 2021 12:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-26_05_45_04-12110504230520579641 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 223e23a8-5da9-49c9-96c9-30c691bad098 and timestamp: 2021-10-26T12:49:36.359000000Z:
                     Metric:                    Value:
                   read_time                     7.315
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:49:36 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
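
That warning explains why the read_time and fields_read values above go no further than stdout: the InfluxDB publisher was given neither a measurement nor a database. A hedged sketch of supplying them, assuming the builder API of Beam's org.apache.beam.sdk.testutils.publishing.InfluxDBSettings (the host, database and measurement values are placeholders):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // With measurement/database left unset, the publisher logs the
        // "Missing property -- measurement/database" warning above and
        // skips publishing.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")    // placeholder
                .withDatabase("beam_test_metrics")    // placeholder
                .withMeasurement("sql_bqio_read_java_batch")
                .get();
      }
    }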

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 49.525 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/omvofqq6jbqdq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2591

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2591/display/redirect>

Changes:


------------------------------------------
[...truncated 341.86 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
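
This warning is triggered by the empty --workerHarnessContainerImage= in the test command line above; on SDKs where the replacement option exists, the equivalent spelling would be along the lines of (the image tag is a placeholder, not what this job uses):

    --sdkContainerImage=apache/beam_java8_sdk:2.35.0
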
    Oct 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4531258299563612406.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CwYI0P8cmgGqZfkdFPlfqEAKHfknuI4QklhixXRLHM8.jar
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cf8f18f4b20c815acad4a50152d9a8fd23593ca1985bdaec29729247d37e09cd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-z48Y9LIMgVrK1KUBUtmo_SNZPKGYW9rsKXKSR9N-Cc0.pb
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_23_45_20-16445457230774478233?project=apache-beam-testing
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_23_45_20-16445457230774478233
    Oct 26, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_23_45_20-16445457230774478233
    Oct 26, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T06:45:24.783Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:33.539Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.223Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.260Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.290Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.347Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.376Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.408Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.745Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:45:34.822Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:02.618Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:20.189Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:46.364Z: Workers have started successfully.
    Oct 26, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:46:46.386Z: Workers have started successfully.
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:12.785Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:12.921Z: Cleaning up.
    Oct 26, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:47:13.018Z: Stopping worker pool...
    Oct 26, 2021 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:49:45.107Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 6:49:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T06:49:45.143Z: Worker pool stopped.
    Oct 26, 2021 6:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_23_45_20-16445457230774478233 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4f5d7a15-90e5-4d9a-bf06-68c882807e58 and timestamp: 2021-10-26T06:49:51.269000000Z:
                     Metric:                    Value:
                   read_time                     8.179
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 6:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 47.408 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 100 executed, 52 from cache

Publishing build scan...
https://gradle.com/s/pp7uvuncuz4h6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2590

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2590/display/redirect?page=changes>

Changes:

[Luke Cwik] [BEAM-12522] Enable side inputs on all splittable DoFn execution time

[noreply] Merge pull request #15783 from [BEAM-13048] [Playground] Add shortcuts

[Luke Cwik] [BEAM-8543] Fix test filtering for Dataflow Runner V2 to exclude


------------------------------------------
[...truncated 341.95 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ab2087c21b1e2b8cf9f9a8656aead894
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 26, 2021 12:45:18 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 26, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 26, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 26, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 26, 2021 12:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 26, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-G61PilMdlDdF-JxG3AftmHgpano1A2AXY4yoE7Tp0qM.jar
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6884979175832345599.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vwvqpYLE-nloHPrqT09fVU1QXg8SNxyDQ8RqB4lQCgc.jar
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 26, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c8c4e1b924817efd5b4a031af9a4711c8626b1e36a1338edc79e48c8a2af012> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DIxOG5JIF-_VtKAxr5pHEchiax42oTOO3HnkjIoq8BI.pb
    Oct 26, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_17_45_32-14179158492439714256?project=apache-beam-testing
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_17_45_32-14179158492439714256
    Oct 26, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_17_45_32-14179158492439714256
    Oct 26, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-26T00:45:35.505Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 26, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:45.322Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.052Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.092Z: Expanding GroupByKey operations into optimizable parts.
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.118Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.206Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.232Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.259Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.616Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:45:46.671Z: Starting 5 workers in us-central1-a...
    Oct 26, 2021 12:46:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:10.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 26, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:30.068Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 26, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:54.870Z: Workers have started successfully.
    Oct 26, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:46:54.903Z: Workers have started successfully.
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:23.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:24.074Z: Cleaning up.
    Oct 26, 2021 12:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:47:24.157Z: Stopping worker pool...
    Oct 26, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:49:48.338Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 26, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-26T00:49:48.375Z: Worker pool stopped.
    Oct 26, 2021 12:49:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_17_45_32-14179158492439714256 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): de667016-1e8a-4b6e-ba7d-f9af7602e370 and timestamp: 2021-10-26T00:49:53.529000000Z:
                     Metric:                    Value:
                   read_time                     8.328
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 26, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.531 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/pqpzuexkifwlw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2589

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2589/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM-13062][Playground]

[ilya.kozyrev] Add environment_service.go and structures for beam sdk, network envs,

[aydar.zaynutdinov] [BEAM-13062][Playground]

[noreply] Merge pull request #15772 from [BEAM-13032] [Playground] Implement the

[noreply] Merge pull request #15714 from [BEAM-13005] [Playground] Implement local

[noreply] Merge pull request #15770 from [BEAM-13095][Playground] Using working

[noreply] [BEAM-11758] Update basics page: Aggregation, Runner, UDF, Schema


------------------------------------------
[...truncated 338.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b1c7c34038d3084017a9c5ad5e385269
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
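
The -DbeamTestPipelineOptions JSON array in the command above is how these integration tests receive their runner, project, and worker configuration. A minimal sketch of reading it back inside a test process using Beam's TestPipeline helper (OptionsProbe is an illustrative class name, not part of the test suite):

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class OptionsProbe {
      public static void main(String[] args) {
        // Parses the JSON list from the beamTestPipelineOptions system property.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getRunner()); // DataflowRunner, per the command above
      }
    }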

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 6:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
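
The failure happens at pipeline construction time: the Row output of the RowMonitor ParDo carries no schema, so no coder can be inferred and the job is never submitted. A minimal sketch of the remedy the exception message itself points to, with an illustrative schema matching the projected columns (the variable rows stands in for the test's actual PCollection<Row>, which is not shown in this log):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

    // 'rows' is illustrative: the Row-typed output of the monitoring ParDo.
    // Attaching the schema lets Beam infer a RowCoder for the collection.
    PCollection<Row> withCoder = rows.setRowSchema(schema);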

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
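
Unlike the two failing tests above, the push-down variant plans the whole query into a single BeamPushDownIOSourceRel, so only the four referenced fields are read and the filter is evaluated inside BigQuery's storage read. A minimal sketch of issuing the same query through Beam SQL's public entry point; it assumes 'pipeline' is an existing Pipeline and that HACKER_NEWS has been registered with it (for example via SqlTransform.withTableProvider), which is outside this fragment:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Table registration omitted; see SqlTransform.withTableProvider.
    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));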
    Oct 25, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1401828045232600418.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-77zOSxMWEXP8Jnii8Hti27KAyEopOUys2QswanR0rLc.jar
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b89b701b0d2a46bfd25cbb0ccd4998f85fdc9bbde3fe0053ba972ed7e382a693> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uJtwGw0qRr_SXLsMzUmY-F_cm73j_gBTupcu1-OCppM.pb
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_11_45_09-11740430234319184322?project=apache-beam-testing
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_11_45_09-11740430234319184322
    Oct 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_11_45_09-11740430234319184322
    Oct 25, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T18:45:12.380Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.050Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.899Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.925Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:41.982Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.038Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.071Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.105Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.399Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:45:42.459Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:00.171Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:31.506Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:56.283Z: Workers have started successfully.
    Oct 25, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:46:56.311Z: Workers have started successfully.
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.183Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.311Z: Cleaning up.
    Oct 25, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:47:25.368Z: Stopping worker pool...
    Oct 25, 2021 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:50:00.808Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 6:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T18:50:00.851Z: Worker pool stopped.
    Oct 25, 2021 6:50:08 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_11_45_09-11740430234319184322 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1a0b850a-389d-445d-b433-46fb67348710 and timestamp: 2021-10-25T18:50:08.662000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.789

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:50:09 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.085 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.083 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 18.164 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 49s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gn6ezsourdbjk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2588

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2588/display/redirect>

Changes:


------------------------------------------
[...truncated 338.59 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is b1c7c34038d3084017a9c5ad5e385269
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 12:45:07 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 12:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4749405735551215320.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-udTe-prq1QvhM3PQ7SYUFamjEZblg_QiQQadPqcK1mk.jar
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ebee6c9f08ece616e8d91d481ccb11239b5897ddb83e34d0b5c8a84f3ea7cd0e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6-5snwjs5hbo2R1IHMsRI5tYl924PjTQtcioTz6nzQ4.pb
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-25_05_45_21-3164346719354293428?project=apache-beam-testing
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-25_05_45_21-3164346719354293428
    Oct 25, 2021 12:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-25_05_45_21-3164346719354293428
    Oct 25, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T12:45:25.220Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:32.669Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.496Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.541Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.579Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.656Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.690Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:33.724Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:34.102Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:45:34.181Z: Starting 5 workers in us-central1-c...
    Oct 25, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:04.004Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:14.161Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:39.730Z: Workers have started successfully.
    Oct 25, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:46:39.751Z: Workers have started successfully.
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:08.787Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:08.931Z: Cleaning up.
    Oct 25, 2021 12:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:47:09.025Z: Stopping worker pool...
    Oct 25, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:49:32.415Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 12:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T12:49:32.462Z: Worker pool stopped.
    Oct 25, 2021 12:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-25_05_45_21-3164346719354293428 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ce2c8767-afd2-4708-b36e-a06e5fcb9d0c and timestamp: 2021-10-25T12:49:42.320000000Z:
                     Metric:                    Value:
                   read_time                     8.531
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 40.562 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/s47txd5eoqqtk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2587

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2587/display/redirect?page=changes>

Changes:

[noreply] Fixing BigQueryIO request too big corner case for streaming inserts


------------------------------------------
[...truncated 342.70 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 6:45:09 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 6:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 6:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6462084005405693165.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KEBooeD5rYr-VXEXmytgYWlbKmkfk_nE2PlhT7MP5v8.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5ac93a787833e8fd9b9274c677e2148489e2b601ec4c6f3c7d18f83f4a1e1325> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Wsk6eHgz6P2bknTGd-IUhInitgHsTG88fRj4P0oeEyU.pb
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_23_45_21-3312190335979126018?project=apache-beam-testing
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_23_45_21-3312190335979126018
    Oct 25, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_23_45_21-3312190335979126018
    Oct 25, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T06:45:24.558Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:30.887Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.671Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.709Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.735Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.810Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.843Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:31.867Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:32.246Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:32.321Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:45:54.437Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:11.304Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:11.339Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 25, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:21.608Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:47.181Z: Workers have started successfully.
    Oct 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:46:47.209Z: Workers have started successfully.
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:12.876Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:13.048Z: Cleaning up.
    Oct 25, 2021 6:47:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:47:13.139Z: Stopping worker pool...
    Oct 25, 2021 6:49:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:49:33.636Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 6:49:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T06:49:33.671Z: Worker pool stopped.
    Oct 25, 2021 6:49:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_23_45_21-3312190335979126018 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 97be82fe-6fc4-4361-8b87-c083f4e9da9e and timestamp: 2021-10-25T06:49:39.273000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      7.18

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 6:49:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
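
    This InfluxDB warning is unrelated to the test failures: the run simply lacks the measurement/database settings, so metrics go only to the BigQuery sink configured via --metricsBigQueryDataset/--metricsBigQueryTable. If InfluxDB publishing were wanted, the -DbeamTestPipelineOptions list would need the pair the publisher checks for, along the lines of (option names assumed from Beam's testutils conventions, not verified against this job):

        "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"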

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 35.046 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 21s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/w7kyn4zosmpgs

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2586

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2586/display/redirect>

Changes:


------------------------------------------
[...truncated 338.40 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 25, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 25, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 25, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 25, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 25, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 25, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 25, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2595792217425081203.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Svu3U0YG4soNQ-jJoQ7YB5zJckxCzgPiwdj6X9eyS3s.jar
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 25, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1c18c0992bea98f286e19255f784d1ea7a16a2d5bc10e8125d5a6694a20a4c0a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-HBjAmSvqmPKG4ZJV94TR6noWotW8EOgSXVpmlKIKTAo.pb
    Oct 25, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_17_45_05-9734064161454404204?project=apache-beam-testing
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_17_45_05-9734064161454404204
    Oct 25, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_17_45_05-9734064161454404204
    Oct 25, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-25T00:45:08.185Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:14.793Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.446Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.474Z: Expanding GroupByKey operations into optimizable parts.
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.508Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.589Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.617Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.650Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.928Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:15.986Z: Starting 5 workers in us-central1-a...
    Oct 25, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:28.630Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 25, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:51.127Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:45:51.168Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 25, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:01.393Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 25, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:26.125Z: Workers have started successfully.
    Oct 25, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:26.150Z: Workers have started successfully.
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.588Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.748Z: Cleaning up.
    Oct 25, 2021 12:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:46:53.836Z: Stopping worker pool...
    Oct 25, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:49:21.277Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 25, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-25T00:49:21.318Z: Worker pool stopped.
    Oct 25, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_17_45_05-9734064161454404204 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 600a77f3-8d89-4571-bb36-20a416520534 and timestamp: 2021-10-25T00:49:26.572000000Z:
                     Metric:                    Value:
                   read_time                     6.658
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 25, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.103 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mfpxpljom2t3g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2585

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2585/display/redirect>

Changes:


------------------------------------------
[...truncated 338.46 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 24, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2281831264933747915.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BFpAUBKkNYZEkFjs7iKxv0Rjtt-SZ_ChDCD2muD7Gbc.jar
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 46658f6a160e60bde29ee79ef6ba17502692c31d5eab264279671f06a43e72ec> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RmWPahYOYL3inuee9roXUCaSwx1eqyZCeWcfBqQ-cuw.pb
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_11_45_03-14601734037435446?project=apache-beam-testing
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_11_45_03-14601734037435446
    Oct 24, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_11_45_03-14601734037435446
    Oct 24, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T18:45:06.736Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:12.335Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.106Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.144Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.161Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.221Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.249Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.281Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.592Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:13.670Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:45.511Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:45:58.191Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:27.415Z: Workers have started successfully.
    Oct 24, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:27.448Z: Workers have started successfully.
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.385Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.528Z: Cleaning up.
    Oct 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:46:57.596Z: Stopping worker pool...
    Oct 24, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:49:22.080Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T18:49:22.142Z: Worker pool stopped.
    Oct 24, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_11_45_03-14601734037435446 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 72e4438f-eb5b-436f-9755-825bad0fccdc and timestamp: 2021-10-24T18:49:27.274000000Z:
                     Metric:                    Value:
                   read_time                     8.245
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
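
Context for the warning above: the metrics publisher skips InfluxDB publishing when the measurement/database settings are absent from the test pipeline options. A sketch of the extra -DbeamTestPipelineOptions entries that would enable it — the flag names below (--influxDatabase, --influxMeasurement, --influxHost) follow Beam's perf-test option conventions and, like the values, are assumptions rather than something taken from this log:

    --influxDatabase=beam_test_metrics
    --influxMeasurement=sql_bqio_read_java_batch
    --influxHost=http://localhost:8086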

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 40.39 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
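
To reproduce this failure locally with a full stack trace, the failing task can be rerun with the flag Gradle suggests above; a sketch, with the task path taken from this log:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --stacktrace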

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ojtl77yvwijri

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2584

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2584/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
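
The coder failure above is the one the exception text itself points at: the PCollection<Row> produced by the SQL rel has no schema attached, so no RowCoder can be inferred. A minimal Java sketch of the suggested fix, assuming access to that PCollection; the field names and types mirror the SELECT list of the logged query and are illustrative, not taken from the test source:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    final class RowSchemaFix {
      /** Attaches a schema so the SDK can infer a RowCoder (the fix the exception suggests). */
      static PCollection<Row> withRowSchema(PCollection<Row> rows) {
        // Field names/types mirror the SELECT list of the logged query; illustrative only.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        // setRowSchema(schema) is equivalent to setCoder(RowCoder.of(schema)).
        return rows.setRowSchema(schema);
      }
    }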

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
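
For context on what the BeamPushDownIOSourceRel plan above translates to: a sketch of an equivalent hand-written read using BigQueryIO's Storage Read API options. The selected fields and row restriction mirror the usedFields and pushed-down filter logged above; the table spec and class name are hypothetical:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            "Read BQ rows with push-down",
            BigQueryIO.readTableRows()
                .from("my-project:my_dataset.HACKER_NEWS") // hypothetical table spec
                .withMethod(Method.DIRECT_READ)
                // Projection push-down: only the referenced columns are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the same filter the planner logged above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
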
    Oct 24, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3220223612139521038.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SxyqRG-T0_8658xNXHO5x0m0-2Y2aE5IySXuTWRZd1U.jar
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5c74b9531ad0d39b7fba4e188553428969bcd09b06af65e77b9677878684d51c> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-XHS5UxrQ05t_uk4YhVNCiWm80JsGr2Xne5Z3h4aE1Rw.pb
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-24_05_45_11-198687276119188551?project=apache-beam-testing
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-24_05_45_11-198687276119188551
    Oct 24, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-24_05_45_11-198687276119188551
    Oct 24, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T12:45:14.601Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.011Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.807Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.846Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.862Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.933Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.953Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:19.987Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:20.307Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:20.367Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 12:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:45:37.590Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:04.308Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:31.912Z: Workers have started successfully.
    Oct 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:46:31.931Z: Workers have started successfully.
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.477Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.626Z: Cleaning up.
    Oct 24, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:47:00.698Z: Stopping worker pool...
    Oct 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:49:20.814Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T12:49:20.876Z: Worker pool stopped.
    Oct 24, 2021 12:49:25 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-24_05_45_11-198687276119188551 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0ea1fa4c-570f-4d4e-b2db-9bbe7caea725 and timestamp: 2021-10-24T12:49:25.876000000Z:
                     Metric:                    Value:
                   read_time                     7.363
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 32.174 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/52tcykkweyrcq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2583

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2583/display/redirect>

Changes:


------------------------------------------
[...truncated 336.90 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 24, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 24, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 6:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test986201900298641337.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OV5g-cowxO6zfmtIngBxzK3mHZp1HkEiHVK-HHn-m6g.jar
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a7e8d583e26229f5c31f178f1bfd3ad9b49b7da85d7b4842a8eaea89cf669895> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p-jVg-JiKfXDHxePG_062bSbfahde0hCqOrqic9mmJU.pb
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_23_45_02-13720345690466915280?project=apache-beam-testing
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_23_45_02-13720345690466915280
    Oct 24, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_23_45_02-13720345690466915280
    Oct 24, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T06:45:06.087Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.007Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.851Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.912Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:11.945Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.004Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.030Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.054Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.370Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:12.433Z: Starting 5 workers in us-central1-a...
    Oct 24, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:23.092Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:45:58.694Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:25.353Z: Workers have started successfully.
    Oct 24, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:25.381Z: Workers have started successfully.
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:52.979Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:53.109Z: Cleaning up.
    Oct 24, 2021 6:46:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:46:53.175Z: Stopping worker pool...
    Oct 24, 2021 6:49:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:49:07.374Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 6:49:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T06:49:07.415Z: Worker pool stopped.
    Oct 24, 2021 6:49:15 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_23_45_02-13720345690466915280 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f4e5badd-2a26-4279-ac69-b2de80cd8a86 and timestamp: 2021-10-24T06:49:15.304000000Z:
                     Metric:                    Value:
                   read_time                     8.898
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 6:49:15 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 28.931 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/t7eiql7zcystu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2582

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2582/display/redirect>

Changes:


------------------------------------------
[...truncated 338.52 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 24, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
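
Note: this warning is actionable in the Jenkins job definition. The same (intentionally empty) value can be passed under the replacement option the message names. An illustrative swap inside the -DbeamTestPipelineOptions list above:

    "--sdkContainerImage="   (instead of the deprecated "--workerHarnessContainerImage=")
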
    Oct 24, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 24, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 24, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
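
Note: the exception lists its own remedies. A minimal sketch of both, assuming the failing PCollection<Row> (the ParDo(RowMonitor) output) were reachable from test code; the schema below is illustrative and covers only the four projected columns, whereas the real schema comes from the BigQuery table definition:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> withRowSchema(PCollection<Row> rows) {
      // Illustrative schema for the projected columns (author, type, title, score).
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      // Remedy the message prefers: attach the schema so Beam can infer a
      // schema-aware coder for the Row elements.
      return rows.setRowSchema(schema);
      // Alternative remedy, an explicit coder built from the same schema:
      //   rows.setCoder(RowCoder.of(schema));
    }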

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 24, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
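
Note: at the IO level, the push-down shown above amounts to trimming the read to the used fields and handing the filter to the BigQuery Storage API. A rough hand-written equivalent with plain BigQueryIO, using a placeholder table reference (the SQL path derives all of this automatically from the BeamPushDownIOSourceRel):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    static PCollection<TableRow> readWithPushDown(Pipeline pipeline) {
      return pipeline.apply(
          "Read Input BQ Rows with push-down",
          BigQueryIO.readTableRows()
              .from("project-id:dataset.HACKER_NEWS") // placeholder table reference
              .withMethod(TypedRead.Method.DIRECT_READ)
              // usedFields from the plan above:
              .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
              // the filter the planner classified as push-down supported:
              .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
    }
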
    Oct 24, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 24, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5620400807474782905.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PvVfJH29nzZezX6DdAD2fei6EHEtmTyWx_r0EaBp4Go.jar
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 24, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash eb265cd519ab6482788bd1b185f3d113b78efd4154495af0b3e70f392733a1fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6yZc1RmrZIJ4i9GxhfPRE7eO_UFUSVrws-cPOSczofs.pb
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 24, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_17_45_05-11425394296655645248?project=apache-beam-testing
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_17_45_05-11425394296655645248
    Oct 24, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_17_45_05-11425394296655645248
    Oct 24, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-24T00:45:09.077Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:16.459Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.136Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.174Z: Expanding GroupByKey operations into optimizable parts.
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.200Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.272Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.290Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.338Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.663Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:17.745Z: Starting 5 workers in us-central1-c...
    Oct 24, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:45:42.088Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 24, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:02.194Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 24, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:27.494Z: Workers have started successfully.
    Oct 24, 2021 12:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:27.517Z: Workers have started successfully.
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.310Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.473Z: Cleaning up.
    Oct 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:46:56.538Z: Stopping worker pool...
    Oct 24, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:49:19.776Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 24, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-24T00:49:19.816Z: Worker pool stopped.
    Oct 24, 2021 12:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_17_45_05-11425394296655645248 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9b86f15a-194a-423a-ad72-a17e2abd1bfc and timestamp: 2021-10-24T00:49:25.589000000Z:
                     Metric:                    Value:
                   read_time                     7.565
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 24, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
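
Note: this warning only means the InfluxDB sink was left unconfigured for this run; the BigQuery metrics path (--metricsBigQueryDataset/--metricsBigQueryTable) is unaffected. If InfluxDB publishing were wanted, the test options would also need database/measurement settings. The flag names below follow Beam's testutils conventions and the values are placeholders:

    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"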

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 36.703 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5tctckamenm2y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2581

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2581/display/redirect>

Changes:


------------------------------------------
[...truncated 338.44 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 6:44:50 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:53 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test916840746478523694.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G8FrI5xw5d-pHtGb7JunTIPm5peWp0lb7KMauDTCdJQ.jar
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 3f0c787d54a4d91ba4485b8b3cd5972d51a4a906dc520d9bd65ab29339b06bcc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Pwx4fVSk2RukSFuLPNWXLVGkqQbcUg2b1lqykzmwa8w.pb
    Oct 23, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_11_45_03-11784254790895893403?project=apache-beam-testing
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_11_45_03-11784254790895893403
    Oct 23, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_11_45_03-11784254790895893403
    Oct 23, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T18:45:06.342Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:11.307Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.250Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.380Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.440Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.499Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.526Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.569Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:12.940Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:13.013Z: Starting 5 workers in us-central1-a...
    Oct 23, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:30.945Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:45:59.011Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:23.398Z: Workers have started successfully.
    Oct 23, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:23.431Z: Workers have started successfully.
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.319Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.454Z: Cleaning up.
    Oct 23, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:46:51.528Z: Stopping worker pool...
    Oct 23, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:49:27.362Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 6:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T18:49:27.405Z: Worker pool stopped.
    Oct 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_11_45_03-11784254790895893403 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 659d2668-a9fb-4554-9f4a-4cf586c89f8c and timestamp: 2021-10-23T18:49:33.337000000Z:
                     Metric:                    Value:
                   read_time                     7.448
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 47.484 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/taemuhzyuhkx2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2580

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2580/display/redirect>

Changes:


------------------------------------------
[...truncated 337.45 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4153187230473989759.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aoauSMOjrgz76uA0qyPTW3eGuShWn-6wamd8LclOpBs.jar
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a1b414900d27cf86a14755f0bd70bf55819b25d4c7909d375e8d335496246469> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-obQUkA0nz4ahR1XwvXC_VYGbJdTHkJ03Xo0zVJYkZGk.pb
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-23_05_45_05-4998188572001341519?project=apache-beam-testing
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-23_05_45_05-4998188572001341519
    Oct 23, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-23_05_45_05-4998188572001341519
    Oct 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T12:45:10.596Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
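
This warning is expected given the submitted options: with "--autoscalingAlgorithm=NONE" the worker pool is pinned at "--numWorkers=5", so "--maxNumWorkers" has no effect. A sketch of the alternative, assuming one actually wanted autoscaling for this test (THROUGHPUT_BASED is Dataflow's standard algorithm; swapping it in here is an assumption, not what this job ran with):

    "--autoscalingAlgorithm=THROUGHPUT_BASED","--maxNumWorkers=5"
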
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:18.382Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.164Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.203Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.230Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.296Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.323Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.361Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.685Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:19.763Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:26.559Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:45:58.534Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:23.731Z: Workers have started successfully.
    Oct 23, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:23.765Z: Workers have started successfully.
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.507Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.802Z: Cleaning up.
    Oct 23, 2021 12:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:46:53.953Z: Stopping worker pool...
    Oct 23, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:49:16.570Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T12:49:16.614Z: Worker pool stopped.
    Oct 23, 2021 12:49:23 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-23_05_45_05-4998188572001341519 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 573b20a4-b8f5-4b0b-bacd-3f87a72c7d3f and timestamp: 2021-10-23T12:49:23.171000000Z:
                     Metric:                    Value:
                   read_time                      9.04
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
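
The metrics above were computed but dropped because no InfluxDB measurement/database was configured for the publisher. A hedged sketch of the missing configuration, assuming the option names used elsewhere in Beam's testutils (--influxMeasurement, --influxDatabase, and --influxHost are assumptions here, and the values are placeholders):

    -DbeamTestPipelineOptions=[...,
        "--influxMeasurement=sql_bqio_read_java_batch",
        "--influxDatabase=beam_test_metrics",
        "--influxHost=http://localhost:8086"]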

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 36.406 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings
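
To see the individual deprecation warnings, the same task can be re-run with the flag Gradle suggests, for example:

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest --warning-mode all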

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uoybre6u24wru

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2579

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2579/display/redirect>

Changes:


------------------------------------------
[...truncated 339.39 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
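
The fix for this deprecation is a one-token swap in the test options shown above, keeping the same (empty) value:

    "--sdkContainerImage="   (in place of "--workerHarnessContainerImage=")
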
    Oct 23, 2021 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1581369464775801829.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-M02kErQJir48ULTVIVE-RVq-luRtbY5wAFMczOrUSrc.jar
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0a0e2ff3f2cf81ec8d87d703247efad65dde481d4bcddc4882fdeacb31fe696b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Cg4v8_LPgeyNh9cDJH761l3eSB1LzdxIgv3qyzH-aWs.pb
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_23_45_07-6553272891410165921?project=apache-beam-testing
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_23_45_07-6553272891410165921
    Oct 23, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_23_45_07-6553272891410165921
    Oct 23, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T06:45:10.401Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:16.222Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:16.999Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.026Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.068Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.138Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.164Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.197Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.526Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:17.581Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:39.982Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:45:57.381Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:22.025Z: Workers have started successfully.
    Oct 23, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:22.055Z: Workers have started successfully.
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.111Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.271Z: Cleaning up.
    Oct 23, 2021 6:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:46:59.350Z: Stopping worker pool...
    Oct 23, 2021 6:49:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:49:10.457Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 6:49:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T06:49:10.497Z: Worker pool stopped.
    Oct 23, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_23_45_07-6553272891410165921 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): eec3c578-b3c4-4a82-bf4d-b995c9b0a757 and timestamp: 2021-10-23T06:49:18.200000000Z:
                     Metric:                    Value:
                   read_time                    14.831
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 6:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 28.471 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 59s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xh4teytgmnutu

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2578

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2578/display/redirect>

Changes:


------------------------------------------
[...truncated 347.17 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 23, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 23, 2021 12:45:23 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 23, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 23, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 23, 2021 12:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 23, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 23, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8172644966567795379.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-5SKfHA7pbWfH_cncwRkL6M8c20m2p4DP1fGE9JcmAzg.jar
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 23, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash d92f24c97d541dcabcf2cc743fad5d70150cf6cb31f6b09a906823670a00d766> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2S8kyX1UHcq88sx0P61dcBUM9ssx9rCakGgjZwoA12Y.pb
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 23, 2021 12:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_17_45_37-15579832384745414300?project=apache-beam-testing
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_17_45_37-15579832384745414300
    Oct 23, 2021 12:45:37 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_17_45_37-15579832384745414300
    Oct 23, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-23T00:45:40.148Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:47.695Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.359Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.401Z: Expanding GroupByKey operations into optimizable parts.
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.430Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.505Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.529Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.550Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.857Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:45:48.938Z: Starting 5 workers in us-central1-c...
    Oct 23, 2021 12:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:16.620Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 23, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:29.690Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 23, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:55.074Z: Workers have started successfully.
    Oct 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:46:55.177Z: Workers have started successfully.
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.595Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.743Z: Cleaning up.
    Oct 23, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:47:26.810Z: Stopping worker pool...
    Oct 23, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:49:52.198Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 23, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-23T00:49:52.253Z: Worker pool stopped.
    Oct 23, 2021 12:49:57 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_17_45_37-15579832384745414300 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 98e29aa3-13bd-47cb-a8ca-0fced4b93c6a and timestamp: 2021-10-23T00:49:57.976000000Z:
                     Metric:                    Value:
                   read_time                     9.382
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 23, 2021 12:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.931 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/7kndjw47yqcam

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2577

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2577/display/redirect>

Changes:


------------------------------------------
[...truncated 337.49 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
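
For reference, a sketch of the non-deprecated spelling of this option set programmatically, assuming DataflowPipelineOptions exposes a setSdkContainerImage setter matching the --sdkContainerImage flag named in the warning; the image name is a hypothetical placeholder:

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    class ContainerImageSketch {
      static DataflowPipelineOptions options() {
        DataflowPipelineOptions opts =
            PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
        // Replaces the deprecated --workerHarnessContainerImage flag.
        // The image name below is a hypothetical placeholder.
        opts.setSdkContainerImage("gcr.io/example-project/beam-worker:2.35.0");
        return opts;
      }
    }
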
    Oct 22, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
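
The second root cause listed is the operative one here: the ParDo emits Beam Row elements, and a PCollection of Row cannot get a coder by inference; it needs an explicit schema. A minimal sketch of the remedy the message itself suggests, with a hypothetical schema standing in for the table's real one (illustrative only, not the test's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaSketch {
      // Hypothetical schema covering the four projected columns in the query above.
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      // Attaching the schema lets the SDK select RowCoder, which is exactly
      // what the failed coder inference above is asking for.
      static PCollection<Row> fix(PCollection<Row> rows) {
        return rows.setRowSchema(SCHEMA);
      }
    }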

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
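
This is the essence of the push-down path: unlike the BeamCalcRel plans in the two failing tests above, the projection (by, type, title, score) and the supported filter travel into the BigQuery Storage read itself. At the IO level, the planner's work corresponds roughly to the following hand-written read; the table reference is a hypothetical placeholder and the sketch is illustrative, not the engine's internal code:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownSketch {
      static TypedRead<com.google.api.services.bigquery.model.TableRow> read() {
        return BigQueryIO.readTableRows()
            .from("example-project:beam.HACKER_NEWS")  // hypothetical table reference
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Only the used fields are requested from the Storage API...
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // ...and the supported predicate is evaluated server-side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }
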
    Oct 22, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4812659937698288179.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-HK_O8Kfv2E-xp0m8VU2_UJcCq6Xu1FjNajhvssOahtY.jar
    Oct 22, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b1881ce480f204564b834418301c9652b6b8285691b48247a1a5efd3fff0e5a1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-sYgc5IDyBFZLg0QYMByWUra4KFaRtIJHoaXv0__w5aE.pb
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_11_45_06-6418125828332159712?project=apache-beam-testing
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_11_45_06-6418125828332159712
    Oct 22, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_11_45_06-6418125828332159712
    Oct 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T18:45:10.314Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:15.476Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.232Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.269Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.299Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.352Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.367Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.416Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.729Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:16.809Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:32.160Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:47.748Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:45:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:47.774Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 22, 2021 6:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:45:58.086Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:22.931Z: Workers have started successfully.
    Oct 22, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:22.954Z: Workers have started successfully.
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.248Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.371Z: Cleaning up.
    Oct 22, 2021 6:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:46:50.450Z: Stopping worker pool...
    Oct 22, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:49:08.129Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 6:49:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T18:49:08.166Z: Worker pool stopped.
    Oct 22, 2021 6:49:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_11_45_06-6418125828332159712 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4c7fec99-b8ac-4e9e-a8a3-0c123523220f and timestamp: 2021-10-22T18:49:15.341000000Z:
                     Metric:                    Value:
                   read_time                      7.87
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:49:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 26.074 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 56s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kpieh6wjlnw4u

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2576

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2576/display/redirect>

Changes:


------------------------------------------
[...truncated 337.48 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6513860883471970675.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9wlE9F-L4j_o6_mGA0EzSy3tsPFuagq285rta2TZlUg.jar
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1e8ee19e98beceb002660fe15f08f010f3a28f9e31b126e08fcd569004082861> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Ho7hnpi-zrACZg_hXwjwEPOij54xsSbgj81WkAQIKGE.pb
    Oct 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-22_05_45_09-5025220313817736867?project=apache-beam-testing
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-22_05_45_09-5025220313817736867
    Oct 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-22_05_45_09-5025220313817736867
    Oct 22, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T12:45:12.402Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:17.223Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.129Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.159Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.186Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.259Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.287Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.310Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.621Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:18.697Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 12:45:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:43.231Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:52.955Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:45:52.981Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 22, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:03.242Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:27.608Z: Workers have started successfully.
    Oct 22, 2021 12:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:27.646Z: Workers have started successfully.
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:55.999Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:56.156Z: Cleaning up.
    Oct 22, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:46:56.230Z: Stopping worker pool...
    Oct 22, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:49:16.111Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T12:49:16.188Z: Worker pool stopped.
    Oct 22, 2021 12:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-22_05_45_09-5025220313817736867 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 85f023b6-3ca7-48b7-a29a-efabb9941fe6 and timestamp: 2021-10-22T12:49:22.923000000Z:
                     Metric:                    Value:
                   read_time                     8.509
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 31.596 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 3s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/siwynfk4pt3xm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2575

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2575/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Create a multiplexer that sends Elements based upon


------------------------------------------
[...truncated 347.05 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 82c90d2c6237cfa28e1ead7e4be2fe4a
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 6:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 6:47:12 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 6:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 6:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 6:47:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 6:47:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

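Unlike the two failures above, the planner here collapsed the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel that reads only the four used fields and hands the filter to BigQuery. A sketch of the kind of table declaration that enables this, assuming SqlTransform's DDL support and a hypothetical column list and table location (the actual DDL used by the test is not part of this log):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<Row> result = p.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                // Hypothetical table definition; method DIRECT_READ selects the
                // BigQuery Storage Read API, which is what makes projection and
                // filter push-down possible.
                .withDdlString(
                    "CREATE EXTERNAL TABLE HACKER_NEWS(title VARCHAR, `by` VARCHAR, "
                        + "score BIGINT, type VARCHAR) "
                        + "TYPE bigquery "
                        + "LOCATION 'some-project:some_dataset.hacker_news' "
                        + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'"));
        p.run().waitUntilFinish();
      }
    }
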
    Oct 22, 2021 6:47:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 6:47:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-_oo3FilJCuCIdWVjde-r3rYBg98ZzqydwbrrDgqGG1o.jar
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2704467873999806469.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mKkyCuTrHG9wh9sVz5PggKRxtodra9D3IgEUIGZzVtM.jar
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 34bc0c8c9f4388c52ab475e7766b4710c8aa3abd82c64b5a2a8332c0271f07eb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-NLwMjJ9DiMUqtHXndmtHEMiqOr2CxktaKoMywCcfB-s.pb
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 6:47:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_23_47_28-9679235479846533736?project=apache-beam-testing
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_23_47_28-9679235479846533736
    Oct 22, 2021 6:47:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_23_47_28-9679235479846533736
    Oct 22, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T06:47:31.466Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:36.666Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.354Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.383Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.422Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.474Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.500Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.526Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.845Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:47:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:37.913Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 6:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:47:45.411Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 6:48:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:18.726Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:43.486Z: Workers have started successfully.
    Oct 22, 2021 6:48:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:48:43.511Z: Workers have started successfully.
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.552Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.691Z: Cleaning up.
    Oct 22, 2021 6:49:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:49:10.754Z: Stopping worker pool...
    Oct 22, 2021 6:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:51:43.519Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 6:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T06:51:43.558Z: Worker pool stopped.
    Oct 22, 2021 6:51:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_23_47_28-9679235479846533736 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b7b4b9d2-734b-4fe4-8daf-1f122bd0e0a0 and timestamp: 2021-10-22T06:51:50.197000000Z:
                     Metric:                    Value:
                   read_time                     9.081
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 6:51:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 43.255 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 32s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/ot5c4djvxs4qa

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2574

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2574/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13096] Double test timeout. (#15774)

[noreply] [BEAM-13019] Add `containsInAnyOrder` with matchers to the

[Daniel Oliveira] Avoiding read-only Go module cache in Gradle config.

[noreply] [BEAM-11758] Update basics page: Pipeline, PCollection, PTransform

[noreply] Test SetState addIfAbsent with no read (#15776)

[noreply] lazy creation of source splits for export-based ReadFromBigQuery

[noreply] [BEAM-11275] Support remote package download from remote filesystems in


------------------------------------------
[...truncated 351.17 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 599dc4d4ff281af8baf69fcffab042b0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 22, 2021 12:46:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 22, 2021 12:46:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 22, 2021 12:46:13 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 22, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 22, 2021 12:46:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 22, 2021 12:46:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 22, 2021 12:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3460714990117420804.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ODC0xluqdab86JEn1B721eorFDaytoXALNi2C6k4G18.jar
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 22, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f6f94733fa2e5056c93585100b93bd7f5d8cff4613ed7f3f07dca88b5a061231> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9vlHM_ouUFbJNYUQC5O9f12M_0YT7X8_B9yoi1oGEjE.pb
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 22, 2021 12:46:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_17_46_26-4390393042301835280?project=apache-beam-testing
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_17_46_26-4390393042301835280
    Oct 22, 2021 12:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_17_46_26-4390393042301835280
    Oct 22, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-22T00:46:29.325Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 22, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:34.487Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.134Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.177Z: Expanding GroupByKey operations into optimizable parts.
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.213Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.267Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.293Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.326Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.814Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:46:35.883Z: Starting 5 workers in us-central1-a...
    Oct 22, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:06.842Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 22, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:21.841Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 22, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:46.820Z: Workers have started successfully.
    Oct 22, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:47:46.855Z: Workers have started successfully.
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.430Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.614Z: Cleaning up.
    Oct 22, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:48:12.688Z: Stopping worker pool...
    Oct 22, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:50:34.470Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 22, 2021 12:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-22T00:50:34.512Z: Worker pool stopped.
    Oct 22, 2021 12:50:40 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_17_46_26-4390393042301835280 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 676c8fd1-bd33-4cde-ae5a-c7858b22578f and timestamp: 2021-10-22T00:50:40.154000000Z:
                     Metric:                    Value:
                   read_time                     6.588
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 22, 2021 12:50:40 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 31.934 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 22s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/6nye6wefmwwmk

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2573

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2573/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15761 from [BEAM-13008] Create gradle tasks for the


------------------------------------------
[...truncated 337.25 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 20 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 20'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 20'
Successfully started process 'Gradle Test Executor 20'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 6:44:39 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 6:44:39 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 6:44:40 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 6:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:42 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:43 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:44 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1800141971]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@878870100]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 21, 2021 6:44:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 6:44:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3642596745687179173.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-uRkyJNgXMZYc24k-56QT74P95CK0zumxHkYMPOfLEsI.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 6:44:49 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 434bd57989a4804df7b209bed9a78276a311e902920425d95a583992789e8cc5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Q0vVeYmkgE33sgm-2aeCdqMR6QKSBCXZWlg5kniejMU.pb
    Oct 21, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_11_44_52-4963134360258444781?project=apache-beam-testing
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_11_44_52-4963134360258444781
    Oct 21, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_11_44_52-4963134360258444781
    Oct 21, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T18:44:58.683Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:10.151Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.010Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.050Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.088Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.161Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.197Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.248Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.647Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:11.729Z: Starting 5 workers in us-central1-a...
    Oct 21, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:45:28.265Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:02.787Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:32.843Z: Workers have started successfully.
    Oct 21, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:46:32.883Z: Workers have started successfully.
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.075Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.232Z: Cleaning up.
    Oct 21, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:47:01.323Z: Stopping worker pool...
    Oct 21, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:49:24.255Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 6:49:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T18:49:24.299Z: Worker pool stopped.
    Oct 21, 2021 6:49:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_11_44_52-4963134360258444781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fe052573-83ba-45ca-bf4a-21d532d4f6c6 and timestamp: 2021-10-21T18:49:32.568000000Z:
                     Metric:                    Value:
                   read_time                     7.605
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
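
The warning above comes from the metrics publisher: without the InfluxDB
database/measurement properties, the read_time and fields_read values are only
printed, not persisted. A plausible remedy (assumption: the option names follow
Beam's InfluxDB test utilities and are not confirmed anywhere in this log) is to
pass the InfluxDB settings alongside the other entries in the
-DbeamTestPipelineOptions list, e.g.:

    "--influxHost=<host>","--influxDatabase=<database>","--influxMeasurement=<measurement>"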

Gradle Test Executor 20 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 57.376 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pr2d5zvbfw5fo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2572

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2572/display/redirect>

Changes:


------------------------------------------
[...truncated 338.71 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 12:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
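
The failure above is Beam's Coder-inference error for a PCollection<Row> that has
no schema attached. A minimal sketch of the fix the message itself suggests,
assuming field names and types that mirror the query's projection (they are not
taken from the test's actual source):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    static PCollection<Row> attachSchema(PCollection<Row> rows) {
      // Schema for the projected columns: author, type, title, score.
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              // BigQuery INTEGER is 64-bit; INT64 is an assumption here.
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      // setRowSchema installs a SchemaCoder<Row>, which satisfies the
      // "Unable to return a default Coder" check at pipeline construction time.
      return rows.setRowSchema(schema);
    }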

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
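
The BEAMPlan above shows the projection (usedFields) and the filter being pushed
into the BigQuery source; the "Pushing down the following filter" line just below
confirms what is handed to the Storage Read API. As a minimal sketch of how such
a table can be exposed to Beam SQL with DIRECT_READ (the DDL, the LOCATION value,
and the use of withDdlString are assumptions about a typical setup, not this
job's actual test code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Pipeline pipeline = Pipeline.create();

    // Hypothetical DDL registering the table; DIRECT_READ selects the BigQuery
    // Storage Read API, which is what enables projection and filter push-down.
    String ddl =
        "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, `type` VARCHAR, "
            + "title VARCHAR, score BIGINT) "
            + "TYPE bigquery "
            + "LOCATION 'apache-beam-testing:beam.HACKER_NEWS' "
            + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS author, `type`, title, score FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2")
                .withDdlString(ddl));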

    Oct 21, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8379026470928085335.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TE4v05Iz6GNk3e0phHD1R9aQIyP7M1ykfvxs8yBmDKU.jar
    Oct 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 1b19fec33e65c08c8c8303eb061d3c9c2dd685a81eceaba550c5364f9dae2769> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Gxn-wz5lwIyMgwPrBh08nC3WhagezqulUMU2T52uJ2k.pb
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-21_05_45_13-14069682441316332178?project=apache-beam-testing
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-21_05_45_13-14069682441316332178
    Oct 21, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-21_05_45_13-14069682441316332178
    Oct 21, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T12:45:20.013Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:30.351Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.177Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.219Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.247Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.329Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.358Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 12:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.392Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.728Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:31.782Z: Starting 5 workers in us-central1-c...
    Oct 21, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:45:40.351Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 12:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:17.465Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 12:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:41.541Z: Workers have started successfully.
    Oct 21, 2021 12:46:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:46:41.573Z: Workers have started successfully.
    Oct 21, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:11.775Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:13.356Z: Cleaning up.
    Oct 21, 2021 12:47:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:47:13.985Z: Stopping worker pool...
    Oct 21, 2021 12:49:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:49:50.461Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 12:49:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T12:49:50.495Z: Worker pool stopped.
    Oct 21, 2021 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-21_05_45_13-14069682441316332178 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ed88d379-c53c-4e1f-9e77-65aa610091e6 and timestamp: 2021-10-21T12:49:58.063000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.408

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 5 mins 3.522 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/s5cxprhwp2xm4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2571

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2571/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13082] Re-use dataWriter buffer. (#15762)


------------------------------------------
[...truncated 340.96 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 21, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7834221037006516401.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-315tDq8MFlGE39qB9vOsf-KL5uqIG9cUkUdmALX2Dto.jar
    Oct 21, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2d91f240d883e03925133702491be8ea0739bdf9a51e0a49766164d722a58df5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LZHyQNiD4DklEzcCSRvo6gc5vfmlHgpJdmFk1yKljfU.pb
    Oct 21, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_23_45_11-8989935288718978231?project=apache-beam-testing
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_23_45_11-8989935288718978231
    Oct 21, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_23_45_11-8989935288718978231
    Oct 21, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T06:45:15.376Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.088Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 21, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.892Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.940Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:26.989Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.061Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.096Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.130Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.765Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:27.879Z: Starting 5 workers in us-central1-a...
    Oct 21, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:52.870Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:58.879Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:45:58.907Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:09.225Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:32.447Z: Workers have started successfully.
    Oct 21, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:46:32.478Z: Workers have started successfully.
    Oct 21, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:01.777Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:04.149Z: Cleaning up.
    Oct 21, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:47:05.375Z: Stopping worker pool...
    Oct 21, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:49:21.982Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T06:49:22.025Z: Worker pool stopped.
    Oct 21, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_23_45_11-8989935288718978231 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 594fe433-3119-4656-9f63-07797c0bb092 and timestamp: 2021-10-21T06:49:31.585000000Z:
                     Metric:                    Value:
                   read_time                     8.516
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 6:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.194 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 38.887 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dqhroygbgfunk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2570

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2570/display/redirect>

Changes:


------------------------------------------
[...truncated 338.98 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 21, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 21, 2021 12:44:50 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 21, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 21, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
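
For reference, this projection-plus-filter query is the one every variant of the test plans. A minimal, self-contained sketch of running a query of this shape through Beam SQL's SqlTransform (the in-memory rows and the four-column schema below are assumptions for illustration; the test itself reads the real HACKER_NEWS BigQuery table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed schema covering just the columns the query touches.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> items = p
            .apply(Create.of(
                Row.withSchema(schema).addValues("alice", "story", "hi", 3L).build(),
                Row.withSchema(schema).addValues("bob", "comment", "re: hi", 9L).build()))
            .setRowSchema(schema);

        // PCOLLECTION is Beam SQL's name for the input PCollection.
        items.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
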
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
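
The exception itself names the two remedies: set a Coder explicitly, or (for Rows) attach a schema so the SDK can pick RowCoder during coder inference. A minimal sketch of both, with an assumed two-field schema standing in for the failing PCollection's real one:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed schema; the real one would come from the BigQuery table metadata.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("alice", 3L).build()));

        // Remedy the message recommends for Rows: attach the schema so
        // coder inference succeeds at finishSpecifying time.
        rows.setRowSchema(schema);

        // Equivalent alternative: set the coder explicitly.
        // rows.setCoder(RowCoder.of(schema));

        p.run().waitUntilFinish();
      }
    }
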

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
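
Here the planner has replaced the BeamCalcRel seen in the earlier plans with a single BeamPushDownIOSourceRel: both the projection (usedFields) and the filter are handed to the BigQuery Storage Read API instead of being applied inside the pipeline. Expressed directly against BigQueryIO, the pushed-down read amounts to something like the following sketch (the table name and restriction string are assumptions, not the test's actual wiring):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Projection push-down: only these columns cross the wire.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: evaluated server-side by the Storage Read API.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
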
    Oct 21, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 21, 2021 12:44:59 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2124775571116606532.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nterUiI2-QH3mHeIdZAyFwhgUF_9uFKQqGWbkF4Fxo8.jar
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 21, 2021 12:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 669af0c6eae15c39bbb14ec0f8e2a0e04a606d2b0f52ef42011992ae747c0303> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ZprwxurhXDm7sU7A-OKg4EpgbSsPUu9CARmSrnR8AwM.pb
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 21, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_17_45_02-2472126720864607938?project=apache-beam-testing
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_17_45_02-2472126720864607938
    Oct 21, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_17_45_02-2472126720864607938
    Oct 21, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-21T00:45:06.002Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:14.206Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.016Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.210Z: Expanding GroupByKey operations into optimizable parts.
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.368Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.546Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.588Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.623Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:15.984Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:16.063Z: Starting 5 workers in us-central1-c...
    Oct 21, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:45:24.238Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 21, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:00.047Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 21, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:26.516Z: Workers have started successfully.
    Oct 21, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:26.534Z: Workers have started successfully.
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.310Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.504Z: Cleaning up.
    Oct 21, 2021 12:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:46:56.585Z: Stopping worker pool...
    Oct 21, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:49:29.191Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 21, 2021 12:49:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-21T00:49:29.226Z: Worker pool stopped.
    Oct 21, 2021 12:49:36 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_17_45_02-2472126720864607938 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5fcbf8e9-5380-48fa-ae42-217760f9e046 and timestamp: 2021-10-21T00:49:36.096000000Z:
                     Metric:                    Value:
                   read_time                     7.707
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 21, 2021 12:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
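
This warning means the metrics shown above were computed but skipped by the InfluxDB publisher because no measurement/database was configured; the run's --metricsBigQueryDataset/--metricsBigQueryTable options handle the BigQuery side separately. As a hedged illustration only -- the option names below are assumed from the pattern Beam's other perf-test jobs use, and the values are placeholders -- publishing would be enabled by extending beamTestPipelineOptions with something like:

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics",
    "--influxHost=http://localhost:8086"
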

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.045 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 49.945 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/esybkskpsgodm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2569

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2569/display/redirect?page=changes>

Changes:

[noreply] Add -XX:+AlwaysActAsServerClassMachine to Java SDK container

[moritz] adhoc: Minor update to flink runner docs

[noreply] [BEAM-11087] Add default WindowMappingFn from Main to Side Input


------------------------------------------
[...truncated 337.20 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4953441860378257756.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wwuBlSm9n4Pv5-JlAPhgc-0Ov7i4ymlRWbLFhSEKbdQ.jar
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c23bd765d27c9d90d242d2d6ff7805807bf512b1874a563c72322a3ee8788413> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wjvXZdJ8nZDSQtLW_3gFgHv1ErGHSlY8cjIqPuh4hBM.pb
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_11_45_08-3573852219966413355?project=apache-beam-testing
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_11_45_08-3573852219966413355
    Oct 20, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_11_45_08-3573852219966413355
    Oct 20, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T18:45:12.135Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:17.818Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.620Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.653Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.688Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.764Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.875Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:18.933Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:19.340Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:19.406Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:45:30.548Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:03.797Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:29.177Z: Workers have started successfully.
    Oct 20, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:29.219Z: Workers have started successfully.
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.372Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.503Z: Cleaning up.
    Oct 20, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:46:57.573Z: Stopping worker pool...
    Oct 20, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:49:23.303Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T18:49:23.341Z: Worker pool stopped.
    Oct 20, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_11_45_08-3573852219966413355 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 81ac351d-f904-4eb7-840f-c7e588ede68e and timestamp: 2021-10-20T18:49:30.690000000Z:
                     Metric:                    Value:
                   read_time                     7.473
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 39.801 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wbdln2o2wfpai

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2568

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2568/display/redirect?page=changes>

Changes:

[egalpin] [BEAM-10990] Adds response filtering for ElasticsearchIO

[egalpin] [BEAM-5172] Tries to reduce ES uTest flakiness


------------------------------------------
[...truncated 339.62 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
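    The BEAMPlan above shows the projection and filter folded into the source
    (BeamPushDownIOSourceRel) instead of a separate BeamCalcRel. At the IO level
    this corresponds to a BigQuery Storage API read that requests only the used
    fields and applies the supported predicate server-side. A hedged sketch of
    that equivalent read (the public hacker_news table named here is
    illustrative, not necessarily the table the test reads):

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentRead {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Only the used fields are requested, and the supported predicate is
        // evaluated by the BigQuery Storage API instead of in a BeamCalcRel.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }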
    Oct 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6611324795219753643.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-r046PcstO7Q5G2sH6wsoNmJRJf1URpfZPijXmT13LZ0.jar
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash fc6bfe19c5e9bd96d5367a5d8a9ccea238fc0cfddbd0e3e45dc8622d758b0176> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_Gv-GcXpvZbVNnpdipzOojj8DP3b0OPkXchiLXWLAXY.pb
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-20_05_45_08-10224540884455294911?project=apache-beam-testing
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-20_05_45_08-10224540884455294911
    Oct 20, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-20_05_45_08-10224540884455294911
    Oct 20, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T12:45:13.695Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:21.898Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.642Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.673Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.690Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.751Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.786Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:22.821Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:23.167Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:23.241Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:45:54.770Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:05.246Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:05.275Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 20, 2021 12:46:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:15.532Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:39.173Z: Workers have started successfully.
    Oct 20, 2021 12:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:46:39.241Z: Workers have started successfully.
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.657Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.777Z: Cleaning up.
    Oct 20, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:47:06.854Z: Stopping worker pool...
    Oct 20, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:49:26.793Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T12:49:26.834Z: Worker pool stopped.
    Oct 20, 2021 12:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-20_05_45_08-10224540884455294911 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6591de82-0db9-4c10-bdc8-e23aa36d9e57 and timestamp: 2021-10-20T12:49:37.334000000Z:
                     Metric:                    Value:
                   read_time                      8.03
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
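    This warning means the InfluxDB publisher was given no target
    database/measurement, so only the BigQuery metrics table receives results.
    Assuming the option names used elsewhere in Beam's test utilities (verify
    against InfluxDBSettings before relying on them), the publisher is typically
    pointed at a database with pipeline options along these lines:

    > --influxHost=http://localhost:8086 --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch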

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 46.337 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
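
To iterate on just the failing tests without the full Jenkins job, the same task
can be re-run with Gradle's standard --tests filter (a sketch; the pipeline
options shown in the logged command above would still need to be supplied):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests 'org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT'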

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mnrkfgsl3g37q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2567

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2567/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13079] Updates cross-language transform URNs to use the new


------------------------------------------
[...truncated 340.51 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 892951abc16f3e27e8ef159071a75a5f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 6:45:13 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 6:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8917328065085856496.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-BUzLisP-f9L9NOQlKVRgwG3C7mUCwosJYwR2s31Vci8.jar
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 234003c919b874b18128bc0f374052d0d43234de3ab0a1f23af8a2f3be09eb26> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-I0ADyRm4dLGBKLwPN0BS0NQyNN46sKHyOvii874J6yY.pb
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_23_45_26-13242410420494433170?project=apache-beam-testing
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_23_45_26-13242410420494433170
    Oct 20, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_23_45_26-13242410420494433170
    Oct 20, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T06:45:29.994Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:36.462Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.203Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.234Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.266Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.342Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.360Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.388Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.724Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:37.802Z: Starting 5 workers in us-central1-c...
    Oct 20, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:45:41.426Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:17.084Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:45.152Z: Workers have started successfully.
    Oct 20, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:46:45.183Z: Workers have started successfully.
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.341Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.474Z: Cleaning up.
    Oct 20, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:47:11.552Z: Stopping worker pool...
    Oct 20, 2021 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:49:41.576Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 6:49:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T06:49:41.613Z: Worker pool stopped.
    Oct 20, 2021 6:49:49 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_23_45_26-13242410420494433170 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a633df80-3971-4763-a09f-c53ef755a0d5 and timestamp: 2021-10-20T06:49:49.886000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.396

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 6:49:50 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.567 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 30s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/2fdnoeb542y6s

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2566

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2566/display/redirect?page=changes>

Changes:

[aydar.zaynutdinov] [BEAM_13077][Playground]

[Kyle Weaver] [BEAM-13055] Use unshallow clone to create PR.


------------------------------------------
[...truncated 337.30 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 22 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 1ee7405c9139a4bb4156d477fce72840
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 22'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 22'
Successfully started process 'Gradle Test Executor 22'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 20, 2021 12:44:37 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 20, 2021 12:44:38 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 20, 2021 12:44:38 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 20, 2021 12:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@998764102]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 20, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 20, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 20, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 20, 2021 12:44:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2272615731597150945.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-kYWMcOTm0XIsEWIM4Ol1SBRjQnmFat_Wj2tv4KZ1bZk.jar
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 20, 2021 12:44:47 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b9b5321a17b2bc66c6b0050faf51815675d5f9c28f027f528ee5b77879668813> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ubUyGheyvGbGsAUPr1GBVnXV-cKPAn9SjuW3eHlmiBM.pb
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 20, 2021 12:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_17_44_50-3602477040614239517?project=apache-beam-testing
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_17_44_50-3602477040614239517
    Oct 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_17_44_50-3602477040614239517
    Oct 20, 2021 12:44:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-20T00:44:53.632Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:44:59.455Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.180Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.218Z: Expanding GroupByKey operations into optimizable parts.
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.236Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.320Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.358Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.401Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.776Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:00.857Z: Starting 5 workers in us-central1-a...
    Oct 20, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:08.920Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 20, 2021 12:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:45:45.969Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 20, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:13.169Z: Workers have started successfully.
    Oct 20, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:13.196Z: Workers have started successfully.
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.679Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.829Z: Cleaning up.
    Oct 20, 2021 12:46:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:46:42.918Z: Stopping worker pool...
    Oct 20, 2021 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:49:12.396Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 20, 2021 12:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-20T00:49:12.441Z: Worker pool stopped.
    Oct 20, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_17_44_50-3602477040614239517 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f58ef3f3-1bed-491e-8093-33f3c6ffb2d4 and timestamp: 2021-10-20T00:49:21.122000000Z:
                     Metric:                    Value:
                   read_time                     9.419
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 20, 2021 12:49:22 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
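
The InfluxDBPublisher warning above means the run's metrics were measured but never exported. A minimal sketch of the extra entries the -DbeamTestPipelineOptions array would need, assuming the Beam test-utils InfluxDB options (--influxDatabase, --influxMeasurement, --influxHost) and hypothetical placeholder values:

    "--influxDatabase=<database>","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=<host>"

The measurement name here simply reuses the job's existing --metricsBigQueryTable value; the database and host placeholders would have to come from the Jenkins job configuration.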

Gradle Test Executor 22 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 48.822 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wl436vf63hb4o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2565

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2565/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-13042] Prevent unexpected blocking in

[ilya.kozyrev] Implement initial server with grpcweb wrapper

[noreply] [BEAM-13068] Add a SQL API in Beam Go SDK (#15746)


------------------------------------------
[...truncated 337.69 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 1ee7405c9139a4bb4156d477fce72840
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
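
The exception above names two remedies; the first is to give the monitoring ParDo's output an explicit row schema. A minimal Java sketch of that fix, assuming an upstream PCollection<Row> named rows and a hypothetical RowMonitorDoFn standing in for the test's monitor; the field names and types are inferred from the four columns the test query selects:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the selected columns (author, type, title, score).
    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT64)
        .build();

    // setRowSchema lets Beam infer a coder for the Row output, which is
    // exactly what the "Cannot provide a coder for a Beam Row" cause asks for.
    PCollection<Row> monitored =
        rows.apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorDoFn()))
            .setRowSchema(schema);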

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@311251360]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
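
The second remedy the exception lists is an explicit coder. Under the same assumptions as the sketch above, attaching a RowCoder built from the same schema is equivalent:

    import org.apache.beam.sdk.coders.RowCoder;

    // Alternative to setRowSchema: set the coder directly on the output.
    monitored.setCoder(RowCoder.of(schema));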

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
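
For comparison, the plan and filter above describe what a hand-written direct read would request from the BigQuery Storage API. A hedged Java sketch using the public BigQueryIO read API with the same projection (usedFields) and row restriction; the table reference is a placeholder, since the log does not show which physical table backs beam.HACKER_NEWS:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // DIRECT_READ with the pushed-down projection and filter from the log.
    BigQueryIO.readTableRows()
        .from("<project>:<dataset>.HACKER_NEWS")  // hypothetical table reference
        .withMethod(TypedRead.Method.DIRECT_READ)
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
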
    Oct 19, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT--WHsoFYQMBW8_l9JJpFNEsBwzlE1JcaCn3ndyvhKrfw.jar
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5615582546243157905.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-IcjEkvDtPEUdA2JV050sKrYn2pTUlgW6qSHEUJUzJus.jar
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 87a4e79e7a021c54426af1539bbc715d41337c6f069592b50b02e128e3bc45aa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h6TnnnoCHFRCavFTm7xxXUEzfG8GlZK1CwLhKOO8Rao.pb
    Oct 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_11_45_04-836460464704131324?project=apache-beam-testing
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_11_45_04-836460464704131324
    Oct 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_11_45_04-836460464704131324
    Oct 19, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T18:45:07.328Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:14.609Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.449Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.490Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.517Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.596Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.629Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.660Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:15.985Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:16.057Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:45:30.759Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 6:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:03.527Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:29.563Z: Workers have started successfully.
    Oct 19, 2021 6:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:29.584Z: Workers have started successfully.
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.056Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.252Z: Cleaning up.
    Oct 19, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:46:59.327Z: Stopping worker pool...
    Oct 19, 2021 6:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:49:20.897Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T18:49:20.946Z: Worker pool stopped.
    Oct 19, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_11_45_04-836460464704131324 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bc91b89d-1261-499a-9050-fbed9b644e5c and timestamp: 2021-10-19T18:49:27.291000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.843

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 39.784 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/jlwvg75yr6uju

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2564

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2564/display/redirect>

Changes:


------------------------------------------
[...truncated 337.93 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 12:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2748212382736978675.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-N8KoSoQtT6vbUsWpaoEOs8e_cJzVLbDARn6DKF-2x1E.jar
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 19, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 42cfe120303cc00951dccd9a04c9e10967a88cea4f7c24d6bcd6bf9178ef3278> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Qs_hIDA8wAlR3M2aBMnhCWeojOpPfCTWvNa_kXjvMng.pb
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-19_05_45_18-12249500119034946618?project=apache-beam-testing
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-19_05_45_18-12249500119034946618
    Oct 19, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-19_05_45_18-12249500119034946618
    Oct 19, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T12:45:21.321Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:28.444Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.255Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.303Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.339Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.410Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.436Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.471Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.825Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:29.894Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 12:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:45:45.102Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:10.938Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:37.268Z: Workers have started successfully.
    Oct 19, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:46:37.300Z: Workers have started successfully.
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.182Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.340Z: Cleaning up.
    Oct 19, 2021 12:47:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:47:08.430Z: Stopping worker pool...
    Oct 19, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:49:26.434Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T12:49:26.476Z: Worker pool stopped.
    Oct 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-19_05_45_18-12249500119034946618 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8dcd84d4-bc07-4c2d-88e0-67659763516c and timestamp: 2021-10-19T12:49:31.637000000Z:
                     Metric:                    Value:
                   read_time                     9.295
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:49:32 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 33.69 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/e2lhsypzg2bbo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2563

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2563/display/redirect>

Changes:


------------------------------------------
[...truncated 349.53 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
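
As an aside, the plan above shows both the projection (usedFields) and the filter being handed to the BigQuery source. A rough sketch of how such a query enters Beam SQL is below; the pipeline wiring is an assumption (the IT itself goes through BigQueryTableProvider and BeamSqlRelUtils.toPCollection), and it presumes a table provider exposing HACKER_NEWS has already been registered.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        // With a BigQuery-backed table provider registered, the WHERE clause and
        // the four selected columns are eligible for push-down into the read.
        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }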
    Oct 19, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-exQY3haih5oBrslwMoodif_IANoNjE91ELYD1GlFdlE.jar
    Oct 19, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1336539546458498927.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PLLAMZcJ66Bz2b5k1BjEqH_pF62D-o_-W-mc-rvASdo.jar
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 1581248cac0dc35037e45fc0251d1382f50dcbd70106c40d4e7377f33a1e3086> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FYEkjKwNw1A35F_AJR0TgvUNy9cBBsQNTnN38zoeMIY.pb
    Oct 19, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_23_45_38-3754304436880156814?project=apache-beam-testing
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_23_45_38-3754304436880156814
    Oct 19, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_23_45_38-3754304436880156814
    Oct 19, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T06:45:41.418Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.006Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.920Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:50.965Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.003Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.082Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.148Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.517Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:45:51.594Z: Starting 5 workers in us-central1-c...
    Oct 19, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:46:03.181Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:46:37.326Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:02.960Z: Workers have started successfully.
    Oct 19, 2021 6:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:02.988Z: Workers have started successfully.
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.188Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.342Z: Cleaning up.
    Oct 19, 2021 6:47:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:47:32.456Z: Stopping worker pool...
    Oct 19, 2021 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:49:54.916Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 6:49:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T06:49:54.965Z: Worker pool stopped.
    Oct 19, 2021 6:50:01 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_23_45_38-3754304436880156814 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2b6a725e-98f9-47ff-9a19-5be8cbcbcb5d and timestamp: 2021-10-19T06:50:01.461000000Z:
                     Metric:                    Value:
                   read_time                     8.015
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 6:50:01 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
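
The warning above indicates why no metrics reached InfluxDB: the measurement/database settings were never supplied. In Beam's test utilities these are typically passed as pipeline options along the following lines; the option names come from the testutils publishing code, the values are placeholders, and whether this Jenkins job is meant to set them is an assumption.

    --influxDatabase=beam_test_metrics \
    --influxMeasurement=sql_bqio_read_java_batch \
    --influxHost=http://localhost:8086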

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 42.246 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 41s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/cvxvazwg3icq2

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2562

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2562/display/redirect?page=changes>

Changes:

[zhoufek] [BEAM-9487] Multiple Trigger.may_lose_data fixes

[zhoufek] [BEAM-9487] Remove CONDITION_NOT_GUARANTEED as potential data loss

[zhoufek] [BEAM-9487] Do AfterAny, AfterAll, and AfterEach checks properly (i.e.

[zhoufek] [BEAM-9487] Remove unused import

[zhoufek] [BEAM-9487] Reintroduce flag but do not use it

[zhoufek] [BEAM-9487] Add test that shows AfterCount finishing

[zhoufek] [BEAM-9487] Make _ParallelTriggerFn.may_finish clearer

[Robert Bradshaw] Revert "Merge pull request #15441 from [BEAM-8823] Make FnApiRunner work

[Robert Bradshaw] [BEAM-13040] Add some test cases enforcing side input waiting.

[Robert Bradshaw] lint

[noreply] [BEAM-13068] Add xlangx.DecodeStructPayload (#15741)

[Luke Cwik] [BEAM-13015] Implement a simplified cancellable blocking queue with


------------------------------------------
[...truncated 341.09 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is f4f5b1c924385b744485e1396a8bcb8d
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 19, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 19, 2021 12:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 19, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 19, 2021 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1166891154]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
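
This run fails the same way as the previous one. For completeness, the other remedy listed in the message (setting the coder explicitly rather than attaching a schema) would look roughly as follows; the schema and helper names are illustrative assumptions, as before.

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class ExplicitRowCoderFix {
      // RowCoder.of(schema) builds a coder for Rows with the given schema;
      // .setCoder() is the first remedy named in the exception text.
      static PCollection<Row> attachCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }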

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 19, 2021 12:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 19, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 19, 2021 12:45:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-OVq9X9VVMbYHloIn9ooNsMDjnicWyAJh7yetnGKL094.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5524549753855384375.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sWNtJeWWpKwoUNQXkV8GLzKESmhJCO3dP3Ug-Pb0zgk.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-daLf2p_s1EbljV6ZoB5Pwlvau5Epx8e-pjTPx9ZHN24.jar
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 19, 2021 12:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9dd18d5d81f3e7c52f74ea0adaed28e9f4b1ea92aa14ccde78496ecb379f0175> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ndGNXYHz58UvdOoK2u0o6fSx6pKqFMzeeEluyzefAXU.pb
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 19, 2021 12:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_17_45_28-6254941589366977919?project=apache-beam-testing
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_17_45_28-6254941589366977919
    Oct 19, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_17_45_28-6254941589366977919
    Oct 19, 2021 12:45:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-19T00:45:32.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 19, 2021 12:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:40.450Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.080Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.545Z: Expanding GroupByKey operations into optimizable parts.
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:42.749Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.237Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.338Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.413Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.831Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:45:43.920Z: Starting 5 workers in us-central1-a...
    Oct 19, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:00.304Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 19, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:27.089Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:27.129Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 19, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:37.417Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:59.043Z: Workers have started successfully.
    Oct 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:46:59.308Z: Workers have started successfully.
    Oct 19, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.386Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 19, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.543Z: Cleaning up.
    Oct 19, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:47:26.651Z: Stopping worker pool...
    Oct 19, 2021 12:49:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:49:50.191Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 19, 2021 12:49:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-19T00:49:50.240Z: Worker pool stopped.
    Oct 19, 2021 12:49:56 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_17_45_28-6254941589366977919 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): da72475d-6728-4bfb-bd77-fd47fecd2590 and timestamp: 2021-10-19T00:49:56.555000000Z:
                     Metric:                    Value:
                   read_time                     7.971
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 19, 2021 12:49:57 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 46.237 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/avqtonc5gtiia

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2561

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2561/display/redirect?page=changes>

Changes:

[brachipa] [BEAM-12393] sql support for Zeta Sql

[aydar.zaynutdinov] [BEAM-12988] [Playground]

[brachipa] [BEAM-12393] package private

[brachipa] [BEAM-12393] returning more generic interface

[noreply] [BEAM-13066] Produce abstract iterables from IterableCoder. (#15662)

[noreply] [BEAM-11936] Fix some errorprone warnings (#15648)


------------------------------------------
[...truncated 354.08 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c03513f16af6f9c0fa78bc1eceeb0a94
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 5'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 5'
Successfully started process 'Gradle Test Executor 5'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 6:46:32 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 6:46:33 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 6:46:34 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 6:46:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1307226993]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1658417926]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 6:46:40 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 18, 2021 6:46:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-34H-ZQEas8ae8kKq4Q82eTMbi9XxDPW_7pDdqK04NMg.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6858594797609982854.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-jv2dbNcsYyYrcJ-5GARi3GxcrCYK91c1ABn-2Mk3M38.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-ZlOfBnuV2qb30CsEbUAjwyScGcyyIS3WwPQrSSIXJqc.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-dF2DqJY8lyfxiqSRpdTBaZqBPtzja4rAbcld7GOsX0U.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-OSJn6rDYScnOqVtjqjWjn4keTL2V-mBSDXbhyjSfST0.jar
    Oct 18, 2021 6:46:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tDWGgdT04k0kcm0eO5adgiF1BfJHG8lQ1W4REGrrw0g.jar
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2a468aeff22c4cf10963abd01eab003b8a9f0af434f6a50a8bff2ff8d2d58efd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-KkaK7_IsTPEJY6vQHqsAO4qfCvQ09qUKi_8v-NLVjv0.pb
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 6:46:47 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_11_46_47-1909107604599465188?project=apache-beam-testing
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_11_46_47-1909107604599465188
    Oct 18, 2021 6:46:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_11_46_47-1909107604599465188
    Oct 18, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T18:46:51.177Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.204Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.931Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.975Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:03.999Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.084Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.117Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.145Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.496Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:04.567Z: Starting 5 workers in us-central1-a...
    Oct 18, 2021 6:47:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:15.389Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 6:47:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:47:49.045Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:13.669Z: Workers have started successfully.
    Oct 18, 2021 6:48:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:13.700Z: Workers have started successfully.
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.589Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.749Z: Cleaning up.
    Oct 18, 2021 6:48:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:48:44.839Z: Stopping worker pool...
    Oct 18, 2021 6:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:51:03.857Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 6:51:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T18:51:03.927Z: Worker pool stopped.
    Oct 18, 2021 6:51:09 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_11_46_47-1909107604599465188 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 55134f57-78db-4fce-8df8-b23b3e6410d8 and timestamp: 2021-10-18T18:51:09.647000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     10.67

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:51:10 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
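
This warning means the metrics above were printed to the console but not exported. A hedged fix sketch: the InfluxDB publisher is configured through the test pipeline options, so the -DbeamTestPipelineOptions list shown earlier would gain entries along these lines (the flag names and values below are assumptions, not read from this log):

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"
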

Gradle Test Executor 5 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 41.631 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 50s
152 actionable tasks: 106 executed, 46 from cache

Publishing build scan...
https://gradle.com/s/2bazexoimnpdu

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2560

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2560/display/redirect>

Changes:


------------------------------------------
[...truncated 338.76 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
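
The deprecated flag is the "--workerHarnessContainerImage=" entry in the -DbeamTestPipelineOptions list shown above; per this warning, the preferred spelling replaces that single entry while keeping the same (here empty) value:

    "--sdkContainerImage="
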
    Oct 18, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
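
The three root causes listed in the message double as the menu of fixes. A minimal sketch of the second option, attaching a row schema to the ParDo output explicitly: the DoFn below is a stand-in for the RowMonitor named in the trace, and `rows` stands for the schema-carrying PCollection<Row> feeding it; neither is the IT's actual code.

    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Stand-in for RowMonitor: a Row-to-Row pass-through DoFn.
    class RowMonitorFn extends DoFn<Row, Row> {
      @ProcessElement
      public void process(@Element Row row, OutputReceiver<Row> out) {
        out.output(row); // a real monitor would also record counters here
      }
    }

    // Java cannot infer a Coder for the Row output, so set the schema explicitly,
    // exactly as the error message suggests.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new RowMonitorFn()))
            .setRowSchema(rows.getSchema());
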

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
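
For orientation, the repeated frames correspond to roughly this test-side call path. The toPCollection call and the provider package come straight from the trace; the BeamSqlEnv construction, `pipeline`, and `sqlText` are assumptions made for the sketch.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Parse the SQL into a Beam relational plan, then expand it onto the pipeline;
    // the coder check that fails above runs inside toPCollection.
    BeamSqlEnv sqlEnv = BeamSqlEnv.inMemory(new BigQueryTableProvider());
    PCollection<Row> rows =
        BeamSqlRelUtils.toPCollection(pipeline, sqlEnv.parseQuery(sqlText));
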

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 18, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5065385169657750161.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Q1xF6P6trhpdpeXV1hgArD3Gjfa-CT8z1llRph_VF9Y.jar
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9ca872d6ea340413b1bc05ce682bcc9789af752e1a9c23664d951a3dfb88a3b4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-nKhy1uo0BBOxvAXOaCvMl4mvdS4anCNmTZUaPfuIo7Q.pb
    Oct 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-18_05_45_12-4118238192366139464?project=apache-beam-testing
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-18_05_45_12-4118238192366139464
    Oct 18, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-18_05_45_12-4118238192366139464
    Oct 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T12:45:15.327Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.048Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.860Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.891Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:20.927Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.012Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.050Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.074Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.382Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:21.459Z: Starting 5 workers in us-central1-a...
    Oct 18, 2021 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:45:35.808Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:12.151Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:36.938Z: Workers have started successfully.
    Oct 18, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:46:36.997Z: Workers have started successfully.
    Oct 18, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:03.881Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:04.036Z: Cleaning up.
    Oct 18, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:47:04.106Z: Stopping worker pool...
    Oct 18, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:49:28.061Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 12:49:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T12:49:28.126Z: Worker pool stopped.
    Oct 18, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-18_05_45_12-4118238192366139464 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 38c408ed-d0af-4bda-8f12-97716f3a3d3a and timestamp: 2021-10-18T12:49:34.644000000Z:
                     Metric:                    Value:
                   read_time                     8.926
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 40.903 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6oq3b3uiojn72

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2559

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2559/display/redirect>

Changes:


------------------------------------------
[...truncated 338.62 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 18, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8892862234938683687.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-A256R58UDuU7T_Kj44mFhRwdy1BXOsFu_k7mnAkzcZk.jar
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash aa0153db57e296b66d16585844a48039941dd33bdc2fc1c5bce3c30c04a28154> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qgFT21filrZtFlhYRKSAOZQd0zvcL8HFvOPDDASigVQ.pb
    Oct 18, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_23_45_08-14910980539235606898?project=apache-beam-testing
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_23_45_08-14910980539235606898
    Oct 18, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_23_45_08-14910980539235606898
    Oct 18, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T06:45:11.304Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.006Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.769Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.809Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.845Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.919Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.946Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:18.977Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:19.311Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:19.387Z: Starting 5 workers in us-central1-c...
    Oct 18, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:45:45.449Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:04.869Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:30.375Z: Workers have started successfully.
    Oct 18, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:46:30.401Z: Workers have started successfully.
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:04.999Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:05.181Z: Cleaning up.
    Oct 18, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:47:05.256Z: Stopping worker pool...
    Oct 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:49:26.293Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T06:49:26.339Z: Worker pool stopped.
    Oct 18, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_23_45_08-14910980539235606898 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e67691e8-5057-4838-b5d1-778f5a8004fa and timestamp: 2021-10-18T06:49:31.530000000Z:
                     Metric:                    Value:
                   read_time                    12.491
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.191 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ywwlwlsby576c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2558

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2558/display/redirect>

Changes:


------------------------------------------
[...truncated 338.49 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 18, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 18, 2021 12:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 18, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 18, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
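
This IllegalStateException (and the identical one in readUsingDefaultMethod below) has a single root cause: BeamSqlRelUtils.toPCollection yields a PCollection of Beam Rows whose schema was never attached, so no RowCoder can be inferred. A minimal sketch of the remedy the error message itself names, PCollection.setRowSchema (field names mirror the query's output; nullability is an illustrative assumption, not taken from this build):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /** Attaches a schema so Beam can derive a RowCoder for a Row PCollection. */
    static PCollection<Row> attachRowSchema(PCollection<Row> rows) {
      // Fields match the query's projection: author, type, title, score.
      Schema outputSchema =
          Schema.builder()
              .addNullableField("author", FieldType.STRING)
              .addNullableField("type", FieldType.STRING)
              .addNullableField("title", FieldType.STRING)
              .addNullableField("score", FieldType.INT64)
              .build();
      // setRowSchema supplies the coder that PCollection.getCoder() fails to
      // infer in the stack trace above.
      return rows.setRowSchema(outputSchema);
    }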

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 18, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
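
Unlike the two failing tests, this plan collapses the projection and filter into the IO node itself (BeamPushDownIOSourceRel): only the four used fields are read, and the predicate is evaluated inside BigQuery. As a hedged sketch of roughly what that push-down corresponds to at the BigQueryIO level (the table reference is an assumption for illustration, not taken from this job):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // DIRECT_READ uses the BigQuery Storage Read API, which is what lets the
    // projection (withSelectedFields) and filter (withRowRestriction) run
    // server-side instead of inside the Beam pipeline.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
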
    Oct 18, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 18, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5680631000516002864.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2G4uw-dZrRoR_rTkYc7rieGKftzN7GNqYAUdtkYroFY.jar
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 18, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ce352ef078ff8045b673d2822f368fa5fa49da250362606542c96a52f9f46f2b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zjUu8Hj_gEW2c9KCLzaPpfpJ2iUDYmBlQslqUvn0bys.pb
    Oct 18, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_17_45_06-1251233564020434467?project=apache-beam-testing
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_17_45_06-1251233564020434467
    Oct 18, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_17_45_06-1251233564020434467
    Oct 18, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-18T00:45:09.322Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 18, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.109Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.938Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:16.994Z: Expanding GroupByKey operations into optimizable parts.
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.025Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.087Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.113Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.147Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.486Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:17.559Z: Starting 5 workers in us-central1-c...
    Oct 18, 2021 12:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:48.035Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 18, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:56.162Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:45:56.192Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 18, 2021 12:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:06.408Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 18, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:32.350Z: Workers have started successfully.
    Oct 18, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:46:32.386Z: Workers have started successfully.
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.404Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.532Z: Cleaning up.
    Oct 18, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:47:00.612Z: Stopping worker pool...
    Oct 18, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:49:26.526Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 18, 2021 12:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-18T00:49:26.575Z: Worker pool stopped.
    Oct 18, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_17_45_06-1251233564020434467 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): b8a4cc54-75a1-47f8-bc06-00b9f08bd089 and timestamp: 2021-10-18T00:49:32.822000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.391

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 18, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.589 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/zmdgepc42wzqa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2557

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2557/display/redirect>

Changes:


------------------------------------------
[...truncated 340.49 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 6:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 6:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 17, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4056923343965430316.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-B0X1LLZMnbhtJzTA6-fv34HCYim7n64rltJa-y72k7M.jar
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ae9549bbcc71ca5fb44e8a223a6b2123d5f67b55f60a8d4aa5583e7cf1745caf> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rpVJu8xxyl-0TooiOmshI9X2e1X2Co1KpVg-fPF0XK8.pb
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_11_45_16-3089658574741920063?project=apache-beam-testing
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_11_45_16-3089658574741920063
    Oct 17, 2021 6:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_11_45_16-3089658574741920063
    Oct 17, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T18:45:19.962Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:26.795Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.599Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.636Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.663Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.737Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.764Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:27.797Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:28.173Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:28.239Z: Starting 5 workers in us-central1-c...
    Oct 17, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:55.242Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:58.354Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:45:58.373Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 17, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:08.572Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:34.664Z: Workers have started successfully.
    Oct 17, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:46:34.694Z: Workers have started successfully.
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:01.765Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:01.913Z: Cleaning up.
    Oct 17, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:47:02.000Z: Stopping worker pool...
    Oct 17, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:49:31.669Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T18:49:31.739Z: Worker pool stopped.
    Oct 17, 2021 6:49:36 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_11_45_16-3089658574741920063 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a961e5dc-9bad-41ec-a9b7-718b9f381126 and timestamp: 2021-10-17T18:49:36.846000000Z:
                     Metric:                    Value:
                   read_time                     6.923
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 39.023 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2vc7fnbuh3fsq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2556

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2556/display/redirect>

Changes:


------------------------------------------
[...truncated 340.72 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
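
Both failures above are the standard Beam coder-inference error for a PCollection of Row elements: Row carries no single Java type to infer a Coder from, so a Schema must be attached explicitly. Below is a minimal, self-contained sketch of the usual fix; the field names and types are hypothetical stand-ins mirroring the columns projected by the query, not the test's actual schema.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema mirroring the columns projected by the query above.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows =
            p.apply(Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Hello", 3L).build()))
                // Without this call, coder inference fails with exactly the
                // "Cannot provide a coder for a Beam Row" error seen above.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Note that setRowSchema applies only when the element type is Row; for POJO elements the equivalent is setSchema with explicit to/from-Row conversion functions.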

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 17, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test715673712848259339.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w2RvxqIYtUlLz5K8azoUcEwPo_LrDwBUyb6Tgb8CWb0.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0ea9eabf35e01b6471399348b815299966353628021fb73a6bf6f86584a4375a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DqnqvzXgG2RxOZNIuBUpmWY1NigCH7c6a_b4ZYSkN1o.pb
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-17_05_45_09-3941338970109271771?project=apache-beam-testing
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-17_05_45_09-3941338970109271771
    Oct 17, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-17_05_45_09-3941338970109271771
    Oct 17, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T12:45:12.127Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:16.317Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.002Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.034Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.066Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.137Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.170Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.204Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.545Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:17.633Z: Starting 5 workers in us-central1-a...
    Oct 17, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:51.173Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:45:57.847Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:23.375Z: Workers have started successfully.
    Oct 17, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:23.410Z: Workers have started successfully.
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.020Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.164Z: Cleaning up.
    Oct 17, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:46:51.234Z: Stopping worker pool...
    Oct 17, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:49:20.120Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T12:49:20.160Z: Worker pool stopped.
    Oct 17, 2021 12:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-17_05_45_09-3941338970109271771 finished with status DONE.
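
For context on the BeamPushDownIOSourceRel plan above: push-down means the SQL layer hands both the used fields and the supported predicate to the BigQuery Storage Read API instead of applying them inside the pipeline. At the plain BigQueryIO level this corresponds roughly to the sketch below (the table id is illustrative, not the test's actual configuration):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS")  // illustrative table id
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Only the columns the query uses cross the wire...
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // ...and the supported filter is evaluated on the BigQuery side.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

The fields_read and read_time figures reported below measure this pushed-down read.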

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f94ac8da-abb2-444d-b39c-a5140cf80fd8 and timestamp: 2021-10-17T12:49:26.507000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.328

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
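
The InfluxDB warning above only affects metrics export; the test outcome is unchanged. If publishing were wanted, the missing measurement/database settings would be supplied through the test pipeline options, e.g. by extending the -DbeamTestPipelineOptions array shown at the top of this log. The flag names below follow Beam's InfluxDB test-metrics settings and should be treated as an assumption to verify against the testutils sources:

    -DbeamTestPipelineOptions=["--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch", ...]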

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 36.046 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/y7ehflaolprhw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2555

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2555/display/redirect>

Changes:


------------------------------------------
[...truncated 337.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
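
To separate the schema problem from the BigQuery IO itself, the same query shape can be run against an in-memory table. A minimal sketch using SqlTransform, with made-up rows and the same hypothetical schema as above:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class InMemorySqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> hackerNews =
            p.apply(Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "Hello", 3L).build(),
                    Row.withSchema(schema).addValues("bob", "job", "Hiring", 5L).build()))
                .setRowSchema(schema);

        // A single unnamed input is visible to Beam SQL as the table PCOLLECTION;
        // `by` is a reserved word in Calcite and needs the backtick quoting that
        // the planner output above also uses.
        hackerNews.apply(SqlTransform.query(
            "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }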

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 17, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2070319809089714412.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-EazQ446EH9zndrKGp5LYpQ9Vuky9QPFrGxV328IP3Gs.jar
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 20a042a642470c1da57d998960ad9220f63de8f120f861eb94833e4ce6cbb934> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-IKBCpkJHDB2lfZmJYK2SIPY96PEg-GHrlIM-TObLuTQ.pb
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_23_45_03-4315291018268110247?project=apache-beam-testing
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_23_45_03-4315291018268110247
    Oct 17, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_23_45_03-4315291018268110247
    Oct 17, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T06:45:06.685Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:11.657Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.336Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.375Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.402Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.474Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.504Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.535Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.853Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:12.929Z: Starting 5 workers in us-central1-a...
    Oct 17, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:45:33.166Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:02.299Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:28.278Z: Workers have started successfully.
    Oct 17, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:28.296Z: Workers have started successfully.
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.053Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.175Z: Cleaning up.
    Oct 17, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:46:54.236Z: Stopping worker pool...
    Oct 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:49:21.033Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 6:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T06:49:21.096Z: Worker pool stopped.
    Oct 17, 2021 6:49:27 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_23_45_03-4315291018268110247 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d7b200d2-f66a-4d86-9fa6-28e8895eee2a and timestamp: 2021-10-17T06:49:27.094000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.561

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 6:49:27 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 40.391 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/3h5xigpexa6ps

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2554

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2554/display/redirect>

Changes:


------------------------------------------
[...truncated 338.16 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 17, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 17, 2021 12:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 17, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 17, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 17, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 17, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 17, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 17, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5342571152511254611.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-x-euN8NcAy-N2QJs3-2aWS3Ke2dupAD1njFMfoFzi1g.jar
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 8 seconds
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 17, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash ac1e90e35cace738d00fab63547968e56e67b16246221a414a07bfdbf071cacd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-rB6Q41ys5zjQD6tjVHlo5W5nsWJGIhpBSge_2_Bxys0.pb
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 17, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_17_45_16-15547595533803983167?project=apache-beam-testing
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_17_45_16-15547595533803983167
    Oct 17, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_17_45_16-15547595533803983167
    Oct 17, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-17T00:45:19.809Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:26.746Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.417Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.445Z: Expanding GroupByKey operations into optimizable parts.
    Oct 17, 2021 12:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.472Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.534Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.559Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.593Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.915Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:27.988Z: Starting 5 workers in us-central1-c...
    Oct 17, 2021 12:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:45:50.914Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 17, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:08.577Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:33.476Z: Workers have started successfully.
    Oct 17, 2021 12:46:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:46:33.508Z: Workers have started successfully.
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:02.998Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:03.165Z: Cleaning up.
    Oct 17, 2021 12:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:47:03.233Z: Stopping worker pool...
    Oct 17, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:49:23.569Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 17, 2021 12:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-17T00:49:23.607Z: Worker pool stopped.
    Oct 17, 2021 12:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_17_45_16-15547595533803983167 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 157f5288-fe04-4469-97ee-e73b3a8a5c31 and timestamp: 2021-10-17T00:49:31.366000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.031

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 17, 2021 12:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
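
    (Note: the warning above means the run's metrics, e.g. read_time and
    fields_read, were printed but not exported. A minimal sketch of supplying
    the missing measurement/database configuration, assuming the
    InfluxDBSettings builder in org.apache.beam.sdk.testutils.publishing; the
    host, database, and measurement values below are placeholders, not the
    job's real configuration:

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        public class InfluxSettingsSketch {
          public static void main(String[] args) {
            // Placeholder values: a real run would take these from pipeline
            // options or the environment rather than hard-coding them.
            InfluxDBSettings settings = InfluxDBSettings.builder()
                .withHost("http://localhost:8086")            // assumed local InfluxDB
                .withDatabase("beam_test_metrics")            // the missing "database" property
                .withMeasurement("sql_bqio_read_java_batch")  // the missing "measurement" property
                .get();
            // With both properties present, the publisher would no longer
            // skip publishing with the warning seen above.
            System.out.println("InfluxDB settings configured: " + settings);
          }
        }
    )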

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 41.735 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ibj2j4nen6gfk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2553

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2553/display/redirect>

Changes:


------------------------------------------
[...truncated 338.07 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
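
    (Note: as the warning says, the legacy flag maps directly onto its
    replacement; in the -DbeamTestPipelineOptions list above that would mean
    swapping the entry as follows, keeping the same (here empty) value:

        "--sdkContainerImage="   in place of   "--workerHarnessContainerImage="
    )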
    Oct 16, 2021 6:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
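
    (Note: this coder failure, repeated below for readUsingDefaultMethod and
    again in the later builds, is Beam refusing to infer a coder for a
    PCollection of Rows that has no schema. What follows is a self-contained
    sketch of the remedy the message itself names, PCollection.setRowSchema;
    the schema and values are hypothetical, mirroring the query's projected
    columns, and this is not the integration test's actual code:

        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;
        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.transforms.Create;
        import org.apache.beam.sdk.transforms.DoFn;
        import org.apache.beam.sdk.transforms.ParDo;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        public class RowSchemaSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            // Hypothetical schema mirroring the query's projected columns.
            final Schema schema = Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT32)
                .build();

            PCollection<Row> rows = p
                .apply(Create.of(1)) // placeholder input element
                .apply(ParDo.of(new DoFn<Integer, Row>() {
                  @ProcessElement
                  public void process(OutputReceiver<Row> out) {
                    out.output(Row.withSchema(schema)
                        .addValues("someone", "story", "a title", 3)
                        .build());
                  }
                }))
                // Without this call, pipeline construction fails with exactly
                // the IllegalStateException above: a PCollection<Row> cannot
                // get a default coder until a schema (and hence a RowCoder)
                // is attached.
                .setRowSchema(schema);

            p.run().waitUntilFinish();
          }
        }
    )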

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
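
    (Note: the push-down the planner reports above amounts to handing the
    projection and the filter straight to the BigQuery Storage read. Below is
    a hedged sketch of the equivalent direct BigQueryIO read; the table spec
    is a placeholder, since the IT's actual table mapping is not shown in this
    log:

        import java.util.Arrays;
        import org.apache.beam.sdk.Pipeline;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
        import org.apache.beam.sdk.options.PipelineOptionsFactory;

        public class DirectReadPushDownSketch {
          public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
            p.apply(BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS")  // placeholder table spec
                .withMethod(TypedRead.Method.DIRECT_READ)      // the Storage Read API
                // Only the used fields travel over the wire...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and the supported filter is evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
            p.run().waitUntilFinish();
          }
        }
    )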
    Oct 16, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3745694269218418822.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-elpz0i77A0X_49cCF8gW-d9GwsMCZiSN5Yx8c11mNvE.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c558ddfb8756d38e333978451b0b2ecd14c519c27619947106499cca36a21012> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xVjd-4dW044zOXhFGwsuzRTFGcJ2GZRxBkmcyjaiEBI.pb
    Oct 16, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_11_45_05-962833022890362814?project=apache-beam-testing
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_11_45_05-962833022890362814
    Oct 16, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_11_45_05-962833022890362814
    Oct 16, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T18:45:08.362Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.150Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.835Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.875Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.902Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.970Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:15.997Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.025Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.372Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:16.446Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:45:45.134Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:00.697Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:27.241Z: Workers have started successfully.
    Oct 16, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:27.270Z: Workers have started successfully.
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.486Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.617Z: Cleaning up.
    Oct 16, 2021 6:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:46:55.686Z: Stopping worker pool...
    Oct 16, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:49:16.256Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T18:49:16.307Z: Worker pool stopped.
    Oct 16, 2021 6:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_11_45_05-962833022890362814 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 63b0efc2-578d-4b47-9b4e-abae83586a70 and timestamp: 2021-10-16T18:49:22.663000000Z:
                     Metric:                    Value:
                   read_time                     7.786
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 34.471 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5pjmq2b7bkqv2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2552

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2552/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 16, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1995732283690668423.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UDPlafVzdYkMUvFPq8_F3akXpg81-C-VwyxJtMu2Y3c.jar
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash daecd3ec334425a4d48792ac042cf0b440e863564eff68683f7c36b56eae6fbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2uzT7DNEJaTUh5KsBCzwtEDoY1ZO_2hoP3w2tW6ub7w.pb
    Oct 16, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-16_05_45_11-2778178894381701195?project=apache-beam-testing
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-16_05_45_11-2778178894381701195
    Oct 16, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-16_05_45_11-2778178894381701195
    Oct 16, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T12:45:14.504Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:19.246Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.029Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.064Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.099Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.162Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.190Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.237Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.537Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:20.615Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:45:28.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:05.544Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:05.569Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 16, 2021 12:46:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:26.003Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:30.809Z: Workers have started successfully.
    Oct 16, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:46:30.838Z: Workers have started successfully.
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.047Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.184Z: Cleaning up.
    Oct 16, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:47:00.256Z: Stopping worker pool...
    Oct 16, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:49:17.920Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T12:49:17.960Z: Worker pool stopped.
    Oct 16, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-16_05_45_11-2778178894381701195 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3fcaf61d-caa5-473f-aa3d-d41d3e172c40 and timestamp: 2021-10-16T12:49:25.011000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.134

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:49:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 33.811 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uvf44a3eq2epm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2551

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2551/display/redirect>

Changes:


------------------------------------------
[...truncated 348.03 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 6:45:24 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 6:45:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
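
The two plans above show the same query before and after Beam's physical planning: without push-down, the projection and filter stay in a BeamCalcRel sitting on top of a full scan of the source. For reference, a minimal sketch of issuing this query through Beam SQL's SqlTransform (the input PCollection<Row> `input` and its schema are assumptions here; PCOLLECTION is the implicit name SqlTransform gives its main input):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // `input` must be a schema-aware PCollection<Row> containing the
    // referenced fields: by, type, title, score.
    PCollection<Row> filtered = input.apply(
        SqlTransform.query(
            "SELECT `by` AS author, `type`, title, score "
                + "FROM PCOLLECTION "
                + "WHERE (`type` = 'story' OR `type` = 'job') AND score > 2"));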


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
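
The failure is exactly the coder-inference problem the message spells out: the PCollection coming out of ParDo(RowMonitor) carries Beam Rows, and a Row coder cannot be inferred, only derived from a schema. A minimal sketch of the remedy the message suggests, assuming a hypothetical DoFn<Row, Row> named RowMonitorFn and a schema mirroring the query's projected columns:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Schema schema = Schema.builder()
        .addNullableField("author", Schema.FieldType.STRING)
        .addNullableField("type", Schema.FieldType.STRING)
        .addNullableField("title", Schema.FieldType.STRING)
        .addNullableField("score", Schema.FieldType.INT32)
        .build();

    PCollection<Row> monitored = rows                // rows: PCollection<Row> (assumed)
        .apply(ParDo.of(new RowMonitorFn()))         // RowMonitorFn: hypothetical DoFn<Row, Row>
        // Attach the schema so a RowCoder can be derived; equivalently,
        // .setCoder(RowCoder.of(schema)) would satisfy the first root cause.
        .setRowSchema(schema);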

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 6:45:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 6:45:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
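
This is the push-down path working as intended: the used fields and the filter are handed to the BigQuery Storage Read API instead of being evaluated in a BeamCalcRel. Outside of Beam SQL, the same effect can be sketched directly with BigQueryIO (the pipeline `p` and the table id are illustrative, not the test's actual configuration):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    PCollection<TableRow> stories = p.apply(
        BigQueryIO.readTableRows()
            .from("my-project:my_dataset.HACKER_NEWS")   // illustrative table id
            .withMethod(Method.DIRECT_READ)
            // Projection push-down: only these columns are read.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Predicate push-down: evaluated server-side by the Storage API.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));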
    Oct 16, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4448712973903484506.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0iFCl6Isq1fYgFcb7h8hM8DXlMBtc_28WF30CGjHTEA.jar
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9a2ab7d2a492dc3f5bc88924fed98a4f610f11ea82517b5974b2eb6673ae8e8a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-miq30qSS3D9byIkk_tmKT2EPEeqCUXtZdLLrZnOujoo.pb
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_23_45_39-2842250562811226881?project=apache-beam-testing
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_23_45_39-2842250562811226881
    Oct 16, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_23_45_39-2842250562811226881
    Oct 16, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T06:45:42.475Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:48.360Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.189Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.245Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.275Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.345Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.373Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.403Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.738Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:45:49.817Z: Starting 5 workers in us-central1-a...
    Oct 16, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:16.708Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:35.185Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:35.213Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 16, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:46:55.740Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:01.500Z: Workers have started successfully.
    Oct 16, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:01.553Z: Workers have started successfully.
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.335Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.477Z: Cleaning up.
    Oct 16, 2021 6:47:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:47:30.562Z: Stopping worker pool...
    Oct 16, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:50:02.269Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 6:50:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T06:50:02.311Z: Worker pool stopped.
    Oct 16, 2021 6:50:09 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_23_45_39-2842250562811226881 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 037ac73b-0bb0-4937-a07a-f92aba67f33d and timestamp: 2021-10-16T06:50:09.450000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.967

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 6:50:10 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 49.342 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 50s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/6f3ydepi4safe

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2550

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2550/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11480] Use snippets for DataFrame examples (#15600)

[noreply] Allow multiple Python worker processes to share the same VM. (#15642)

[noreply] [BEAM-12564] Implement Series.hasnans (#15729)

[noreply] Minor: Fix frames_test.py equality check for non-frame outputs (#15734)

[noreply] [BEAM-12769] Adds integration tests for Java Class Lookup based

[Brian Hulette] Address BEAM-4028 cleanup TODOs

[Brian Hulette] Bump Dataflow container to beam-master-20211015

[noreply] [BEAM-13052] Restructure pubsublite folder to move non-user interface


------------------------------------------
[...truncated 355.41 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8633430505ba229471184114133f41c0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'
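
The -DbeamTestPipelineOptions JSON in the command above is how these integration tests receive their configuration: TestPipeline deserializes that system property rather than reading program arguments. A minimal sketch of the consuming side, under that assumption:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // Parses the JSON array in the beamTestPipelineOptions system property.
    PipelineOptions options = TestPipeline.testingPipelineOptions();
    TestPipeline pipeline = TestPipeline.fromOptions(options);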

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 16, 2021 12:50:03 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 16, 2021 12:50:03 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 16, 2021 12:50:04 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 16, 2021 12:50:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 16, 2021 12:50:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 16, 2021 12:50:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 16, 2021 12:50:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 16, 2021 12:50:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 16, 2021 12:50:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 16, 2021 12:50:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-lUguzpOdbIqhvKY4twebC5GyWRVEFGdF9HdZahKjWp4.jar
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3716925047183440446.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cdYIGqTgXqb6LD7pgnX6OW9VayY0THyo08TuZAMg3eo.jar
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 16, 2021 12:50:15 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a75cf6738b1d70d80545e0d19992b1a0320b9c15e388769840b23de10c9a5533> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p1z2c4sdcNgFReDRmZKxoDILnBXjiHaYQLI94QyaVTM.pb
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 16, 2021 12:50:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_17_50_18-4931514561962804645?project=apache-beam-testing
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_17_50_18-4931514561962804645
    Oct 16, 2021 12:50:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_17_50_18-4931514561962804645
    Oct 16, 2021 12:50:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-16T00:50:21.487Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 16, 2021 12:50:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:28.584Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.334Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.489Z: Expanding GroupByKey operations into optimizable parts.
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.517Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.581Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.612Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.643Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:29.988Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:50:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:30.074Z: Starting 5 workers in us-central1-c...
    Oct 16, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:50:40.798Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 16, 2021 12:51:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:13.298Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 16, 2021 12:51:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:38.862Z: Workers have started successfully.
    Oct 16, 2021 12:51:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:51:38.894Z: Workers have started successfully.
    Oct 16, 2021 12:52:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.517Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 16, 2021 12:52:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.648Z: Cleaning up.
    Oct 16, 2021 12:52:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:52:06.719Z: Stopping worker pool...
    Oct 16, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:54:33.237Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 16, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-16T00:54:33.280Z: Worker pool stopped.
    Oct 16, 2021 12:54:39 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_17_50_18-4931514561962804645 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5a5ea29e-c704-4dd0-bc9c-3ae759c0f777 and timestamp: 2021-10-16T00:54:39.092000000Z:
                     Metric:                    Value:
                   read_time                     8.172
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 16, 2021 12:54:39 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 40.146 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 10m 19s
152 actionable tasks: 108 executed, 44 from cache

Publishing build scan...
https://gradle.com/s/mcicesrxiwb3o

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2549

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2549/display/redirect?page=changes>

Changes:

[noreply] Corrected Join Example

[noreply] Minor: Add more links to DataFrame API documentation (#15661)


------------------------------------------
[...truncated 337.49 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c8e4b020b99ccb0e041b11d4c137bdf3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 15, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 15, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8870401464592203205.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TubJEJo0JReG3C3PG7Umi1ifHANYnYUdzQgNPQ4lCaQ.jar
    Oct 15, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 1 seconds
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 51f7cf324c7ed91c38a3a3da5110ddd50da97e5a1cafd1c8407dc2ec9ff6a9fd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-UffPMkx-2Rw4o6PaURDd1Q2pflocr9HIQH3C7J_2qf0.pb
    Oct 15, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_11_45_07-14728167554486098967?project=apache-beam-testing
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_11_45_07-14728167554486098967
    Oct 15, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_11_45_07-14728167554486098967
    Oct 15, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T18:45:10.475Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:17.324Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.409Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.458Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.508Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.590Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.646Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:18.782Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:19.208Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:19.275Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:45.501Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:45:59.183Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:24.829Z: Workers have started successfully.
    Oct 15, 2021 6:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:24.859Z: Workers have started successfully.
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.705Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.853Z: Cleaning up.
    Oct 15, 2021 6:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:46:52.928Z: Stopping worker pool...
    Oct 15, 2021 6:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:49:17.343Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 6:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T18:49:17.378Z: Worker pool stopped.
    Oct 15, 2021 6:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_11_45_07-14728167554486098967 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 87cc2f5c-f737-4dfe-8ca5-fb5afe83a34c and timestamp: 2021-10-15T18:49:22.869000000Z:
                     Metric:                    Value:
                   read_time                     8.296
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:49:23 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
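
This warning means the metrics just printed (read_time, fields_read) were computed but never written to InfluxDB. The publisher appears to read its target from pipeline options, so supplying values along these lines should enable publishing; the exact option names are an assumption to verify against InfluxDBSettings in the Beam test utilities:

    --influxDatabase=beam_test_metrics --influxMeasurement=sql_bqio_read_java_batch --influxHost=http://localhost:8086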

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.037 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 34.334 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qy6z4idv3udww

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2548

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2548/display/redirect>

Changes:


------------------------------------------
[...truncated 345.15 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c8e4b020b99ccb0e041b11d4c137bdf3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 15, 2021 12:45:05 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:10 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 12:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 15, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7237647777722747071.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-wPqyRNsshdRZwos9sMNmncI6BbS8WkvZnjJyrNhLYjA.jar
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7bd831eba26cff554bfc43ab0f670659aa558e9b19b07e046d85acd4da929b14> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-e9gx66Js_1VL_EOrD2cGWapVjpsZsH4EbYWs1NqSmxQ.pb
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-15_05_45_17-4262620966407755275?project=apache-beam-testing
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-15_05_45_17-4262620966407755275
    Oct 15, 2021 12:45:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-15_05_45_17-4262620966407755275
    Oct 15, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T12:45:20.864Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:25.686Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.531Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.568Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.612Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.671Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.698Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:26.730Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:27.054Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:27.122Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 12:45:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:45:54.570Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:23.939Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:53.077Z: Workers have started successfully.
    Oct 15, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:46:53.110Z: Workers have started successfully.
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.309Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.442Z: Cleaning up.
    Oct 15, 2021 12:47:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:47:20.518Z: Stopping worker pool...
    Oct 15, 2021 12:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:49:46.195Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 12:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T12:49:46.235Z: Worker pool stopped.
    Oct 15, 2021 12:49:54 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-15_05_45_17-4262620966407755275 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): ebe1b329-ee34-4373-a504-d2c48c3425cc and timestamp: 2021-10-15T12:49:54.054000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.892

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:49:54 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 53.163 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 35s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gp7wsbd5awr4k

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2547

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2547/display/redirect?page=changes>

Changes:

[vachan] Remove unnecessary ERROR logging.


------------------------------------------
[...truncated 341.49 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 15, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 15, 2021 6:45:02 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1413020570]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1220110338]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 15, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-qOxdI4KUjo3GTP9rF7KeqZGuhs9XeyH-VdOKR6HUq6Q.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test863260845737003274.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-rHjSdeQ3ukyVfnPA3YR3A5diMxTM45lJlgL7egZzO3k.jar
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash f542bb940cfd8a6535e46033e4db5e57478c42f7111b63d9e1207cb923256128> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-9UK7lAz9imU15GAz5NteV0eMQvcRG2PZ4SB8uSMlYSg.pb
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_23_45_17-8007673697252368974?project=apache-beam-testing
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_23_45_17-8007673697252368974
    Oct 15, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_23_45_17-8007673697252368974
    Oct 15, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T06:45:20.827Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:27.826Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.498Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.544Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.578Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.632Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.663Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:28.696Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:29.031Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:45:29.224Z: Starting 5 workers in us-central1-a...
    Oct 15, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:00.783Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:08.656Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:08.687Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 15, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:18.962Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:43.802Z: Workers have started successfully.
    Oct 15, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:46:43.846Z: Workers have started successfully.
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.189Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.317Z: Cleaning up.
    Oct 15, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:47:11.389Z: Stopping worker pool...
    Oct 15, 2021 6:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:49:37.079Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 6:49:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T06:49:37.150Z: Worker pool stopped.
    Oct 15, 2021 6:49:44 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_23_45_17-8007673697252368974 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5691b6bb-cb8c-479a-89ad-0c52e772169f and timestamp: 2021-10-15T06:49:44.088000000Z:
                     Metric:                    Value:
                   read_time                     7.498
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 6:49:44 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 48.455 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wr3y2o2zps43e

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2546

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2546/display/redirect?page=changes>

Changes:

[Robert Bradshaw] [BEAM-13053] Avoid runner v2 when streaming engine explicitly disabled.

[noreply] [Go SDK] Small typo fixes. (#15725)

[noreply] [BEAM-13038] Migrate to use a context object within

[noreply] [BEAM-12135] Avoid windowing overhead in Spark global GBK. (#15637)

[noreply] [BEAM-12907] Run DataFrame API tests with multiple pandas versions


------------------------------------------
[...truncated 376.22 KB...]
    Oct 15, 2021 12:52:49 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 15, 2021 12:52:50 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 15, 2021 12:52:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
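
Editor's sketch: this is the exception both failing tests keep tripping over — the ParDo(RowMonitor) output is a PCollection of Beam Row, and no coder can be inferred for Row without a schema. Below is a minimal, self-contained sketch of the remedy the exception itself names (PCollection.setRowSchema); the field names mirror the projected columns, but the types and values are assumed for illustration and are not taken from the actual test:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Schema matching the projected columns; field types are assumed here.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Row row =
            Row.withSchema(schema).addValues("someone", "story", "a title", 3L).build();

        // Without a schema (or an explicit coder) pipeline construction fails with
        // "Unable to return a default Coder ... Cannot provide a coder for a Beam Row."
        PCollection<Row> rows = p.apply(Create.of(row).withRowSchema(schema));

        // The fix the message suggests; the equivalent explicit form is
        // rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(schema)).
        rows.setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

In the IT itself the schema would presumably need to be attached wherever the PCollection returned by BeamSqlRelUtils.toPCollection is produced, rather than in test code as sketched here.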

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1841175909]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 15, 2021 12:52:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
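
Editor's sketch: in contrast to the two failures above, the push-down path succeeds — the planner collapsed the Calc into BeamPushDownIOSourceRel, so only the four used fields are read and the filter is evaluated by BigQuery via the Storage Read API. Roughly, this corresponds to the following at the plain BigQueryIO level (the table reference is assumed; the IT drives this through Beam SQL rather than calling BigQueryIO directly):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // assumed table reference
                // Storage Read API, as in the DIRECT_READ tests above.
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: read only the used fields.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: BigQuery evaluates the predicate server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
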
    Oct 15, 2021 12:52:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 15, 2021 12:53:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 15, 2021 12:53:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 15, 2021 12:53:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3248217378752106573.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yUaFrTqZh9JOvTHMAfvwkvNh8AMPX1PA5_Yi9fgWuWo.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Oct 15, 2021 12:53:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Oct 15, 2021 12:53:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 8 seconds
    Oct 15, 2021 12:53:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 15, 2021 12:53:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 964d17d19ca3565b6aae1d506eac0fead2fe0c5f08edcbbfd55dd81cbd175fad> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lk0X0ZyjVltqrh1QbqwP6tL-DF8I7cu_1V3YHL0XX60.pb
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 15, 2021 12:53:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_17_53_17-17866899255543482984?project=apache-beam-testing
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_17_53_17-17866899255543482984
    Oct 15, 2021 12:53:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_17_53_17-17866899255543482984
    Oct 15, 2021 12:53:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-15T00:53:20.184Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 15, 2021 12:53:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:26.758Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.504Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.557Z: Expanding GroupByKey operations into optimizable parts.
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.581Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.660Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.686Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:27.711Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:28.019Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:53:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:28.091Z: Starting 5 workers in us-central1-c...
    Oct 15, 2021 12:53:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:34.531Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 15, 2021 12:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:58.724Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:54:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:53:58.754Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 15, 2021 12:54:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:09.038Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 15, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:32.853Z: Workers have started successfully.
    Oct 15, 2021 12:54:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:54:32.888Z: Workers have started successfully.
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.480Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.637Z: Cleaning up.
    Oct 15, 2021 12:55:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:55:03.705Z: Stopping worker pool...
    Oct 15, 2021 12:57:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:57:18.655Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 15, 2021 12:57:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-15T00:57:18.702Z: Worker pool stopped.
    Oct 15, 2021 12:57:24 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_17_53_17-17866899255543482984 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c3f5e145-79ee-4892-9a8d-370e05968926 and timestamp: 2021-10-15T00:57:24.121000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.211

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 15, 2021 12:57:24 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
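
Editor's note: this warning is why no metrics reach InfluxDB even when the push-down test passes — the run was launched without InfluxDB settings. Presumably the missing pieces are the influx options in -DbeamTestPipelineOptions, along the lines of the snippet below (option names taken from Beam's InfluxDB test utilities; the values are illustrative, not the job's real configuration):

    "--influxHost=http://localhost:8086","--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"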

Gradle Test Executor 10 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.039 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.691 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 12m 47s
152 actionable tasks: 118 executed, 34 from cache

Publishing build scan...
https://gradle.com/s/z36zgcb4p22ci

Stopped 9 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2545

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2545/display/redirect?page=changes>

Changes:

[ilya.kozyrev] Add java executor to compile java code in created file system

[zyichi] Use daemon thread to report lull operation and move code to

[noreply] Merge pull request #15722 from [BEAM-13022] [Playground] Run code on the

[Luke Cwik] [BEAM-12640] Skip checkerframework on generated code/test libraries for

[Luke Cwik] [BEAM-13015] Add tiny bundle processing benchmark for Java SDK harness.

[noreply] [BEAM-12922] Test MapState putIfAbsent with no read (#15704)


------------------------------------------
[...truncated 388.37 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 12 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c78b4aae5a459d999f1acebc29b3fde3
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 12'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 12'
Successfully started process 'Gradle Test Executor 12'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 6:57:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 6:57:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 6:57:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:57:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:57:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 6:58:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 14, 2021 6:58:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 6:58:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1382602716978478729.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aHvARrbnqwZA9OApVdCw4aJcpkgJqhq69vc7NrAdsss.jar
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 6:58:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 279a1c0e5056401ef0d809adceeedc89d516ee399d54ea9d929664e464986c51> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J5ocDlBWQB7w2Amtzu7cidUW7jmdVOqdkpZk5GSYbFE.pb
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 6:58:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_11_58_08-9255479186759955978?project=apache-beam-testing
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_11_58_08-9255479186759955978
    Oct 14, 2021 6:58:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_11_58_08-9255479186759955978
    Oct 14, 2021 6:58:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T18:58:11.167Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:15.337Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:15.969Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.012Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.041Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.105Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.128Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 6:58:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.165Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.461Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:58:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:16.536Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 6:58:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:58:32.745Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 6:59:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:00.221Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:59:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:25.107Z: Workers have started successfully.
    Oct 14, 2021 6:59:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:25.132Z: Workers have started successfully.
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.465Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.588Z: Cleaning up.
    Oct 14, 2021 6:59:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T18:59:52.658Z: Stopping worker pool...
    Oct 14, 2021 7:02:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T19:02:07.954Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 7:02:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T19:02:07.991Z: Worker pool stopped.
    Oct 14, 2021 7:02:14 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_11_58_08-9255479186759955978 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c6204950-9d31-4e6b-86cf-2ef768541a0f and timestamp: 2021-10-14T19:02:14.579000000Z:
                     Metric:                    Value:
                   read_time                     8.067
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 7:02:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 12 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 23.941 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 17m 49s
152 actionable tasks: 143 executed, 9 from cache

Publishing build scan...
https://gradle.com/s/aooopaliq5whw

Stopped 11 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2544

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2544/display/redirect>

Changes:


------------------------------------------
[...truncated 337.13 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
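
The IllegalStateException above is Beam's standard complaint when a PCollection<Row> reaches pipeline validation without a schema attached, so no coder can be inferred for it. A minimal sketch of the remedy the second root cause points at, with field names mirroring the four projected columns of the query (hypothetical class name; the real table schema is wider):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Schema matching the projected columns: author, type, title, score.
      static final Schema OUTPUT_SCHEMA =
          Schema.builder()
              .addNullableField("author", FieldType.STRING)
              .addNullableField("type", FieldType.STRING)
              .addNullableField("title", FieldType.STRING)
              .addNullableField("score", FieldType.INT64)
              .build();

      // setRowSchema attaches a SchemaCoder to the collection, which is what
      // "Please provide a schema instead using PCollection.setRowSchema" asks for.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        return rows.setRowSchema(OUTPUT_SCHEMA);
      }
    }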

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
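
readUsingDefaultMethod fails for the same reason as the DIRECT_READ variant: the RowMonitor ParDo output is a PCollection<Row> with no schema. Besides setRowSchema, the first root cause's suggestion of an explicit .setCoder() would also work, since RowCoder wraps the same schema-based encoding. A sketch, assuming the schema is available (hypothetical class name):

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowCoderFix {
      // Equivalent in effect to setRowSchema(schema): RowCoder.of(schema)
      // encodes Rows using the given schema.
      static PCollection<Row> attachCoder(PCollection<Row> rows, Schema schema) {
        return rows.setCoder(RowCoder.of(schema));
      }
    }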

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
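
This is the push-down working as intended: the planner collapsed the Calc into the source, so only the four used fields are requested and the filter is evaluated by the BigQuery Storage Read API instead of by a Beam ParDo. Hand-written, the pushed-down read would correspond roughly to the following sketch (the table reference is a placeholder; the projection and row restriction are copied from the plan above):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    class DirectReadEquivalent {
      static PCollection<TableRow> read(Pipeline pipeline) {
        return pipeline.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news")  // placeholder
                .withMethod(Method.DIRECT_READ)
                // Column projection handled by the Storage Read API.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter handled by the Storage Read API.
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }
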
    Oct 14, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2182681013556753547.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1uzYiVEanmnzAkA_HHPUF6K7JZWR7NpocHAGTnRM4zc.jar
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a2d810cfe9b69e6de2078cdd4fc6550c8817202c176f6c39ee8a4cbb48e3c6c6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-otgQz-m2nm3iB4zdT8ZVDIgXICwXb2w57opMu0jjxsY.pb
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-14_05_45_06-10948096631192102164?project=apache-beam-testing
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-14_05_45_06-10948096631192102164
    Oct 14, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-14_05_45_06-10948096631192102164
    Oct 14, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T12:45:09.946Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:18.349Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.164Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.208Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.233Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.307Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.332Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.362Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.687Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:19.766Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:45:45.138Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:03.766Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:29.218Z: Workers have started successfully.
    Oct 14, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:29.248Z: Workers have started successfully.
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.508Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.624Z: Cleaning up.
    Oct 14, 2021 12:46:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:46:57.697Z: Stopping worker pool...
    Oct 14, 2021 12:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:49:22.538Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 12:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T12:49:22.573Z: Worker pool stopped.
    Oct 14, 2021 12:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-14_05_45_06-10948096631192102164 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6de2c5ce-7bad-46ed-8fef-6e1fa2a2a539 and timestamp: 2021-10-14T12:49:28.636000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.928

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
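
The metrics themselves were collected; only the InfluxDB export is skipped because the publisher finds no measurement/database configured. Presumably the job would need the InfluxDB settings that other Beam performance jobs pass in beamTestPipelineOptions, along the lines of the following (option names as used by Beam's test utilities; values are placeholders):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"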

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 38.598 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/lbzaidrfvsmni

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2543

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2543/display/redirect?page=changes>

Changes:

[kawaigin] Prepare release of apache-beam-jupyterlab-sidepanel v2.0.0


------------------------------------------
[...truncated 340.68 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 14, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2812133051463427131.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6beI723LVZS0pLxVGEBnw6zewY5KmzSVKGcVErlfRNk.jar
    Oct 14, 2021 6:45:15 AM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 10 seconds
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 03522e21521e8fc374c82ce2a40b97d2f2fce5f0b842b23d20a4d35d1c1efee3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-A1IuIVIej8N0yCzipAuX0vL85fC4QrI9IKTTXRwe_uM.pb
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_23_45_18-10186905973464468890?project=apache-beam-testing
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_23_45_18-10186905973464468890
    Oct 14, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_23_45_18-10186905973464468890
    Oct 14, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T06:45:21.917Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.003Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.729Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.757Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.778Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.821Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.838Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:27.872Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:28.170Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:28.242Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:45:33.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 14, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:04.503Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:46:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:04.535Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 14, 2021 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:14.792Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:39.026Z: Workers have started successfully.
    Oct 14, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:46:39.055Z: Workers have started successfully.
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.628Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.830Z: Cleaning up.
    Oct 14, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:47:06.886Z: Stopping worker pool...
    Oct 14, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:49:25.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 6:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T06:49:25.775Z: Worker pool stopped.
    Oct 14, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_23_45_18-10186905973464468890 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c47a9bd9-de82-4ab7-b88d-2ee2e9969c57 and timestamp: 2021-10-14T06:49:30.975000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.707

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 43.38 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cbbmmbx4gsrsa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2542

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2542/display/redirect?page=changes>

Changes:

[Udi Meiri] Release guide: minor updates

[noreply] [BEAM-12644] Javascript files moved to assets (#15653)


------------------------------------------
[...truncated 342.21 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 14, 2021 12:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 14, 2021 12:46:42 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 14, 2021 12:46:43 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 14, 2021 12:46:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:51 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:53 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:54 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@372074801]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1712537381]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 14, 2021 12:46:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 14, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 14, 2021 12:46:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 14, 2021 12:47:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 14, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 14, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 14, 2021 12:47:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3586008666161472238.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-pNWakzO7_S1qqCXYC1QPshbxn3hygdZzQNJ3WkDEcto.jar
    Oct 14, 2021 12:47:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 5 seconds
    Oct 14, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 14, 2021 12:47:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b74abe21a4e2a14b1a671023e8f87a0cc70403e0506ae3501194dc1a6050d8d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-t0q-IaTioUsaZxAj6Ph6DMcEA-BQauNQEZTcGmBQ2NM.pb
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 14, 2021 12:47:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_17_47_17-4909132220303045062?project=apache-beam-testing
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_17_47_17-4909132220303045062
    Oct 14, 2021 12:47:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_17_47_17-4909132220303045062
    Oct 14, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-14T00:47:20.452Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:27.790Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.482Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.514Z: Expanding GroupByKey operations into optimizable parts.
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.546Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.609Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.635Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 14, 2021 12:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.670Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 14, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:28.963Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:29.032Z: Starting 5 workers in us-central1-a...
    Oct 14, 2021 12:47:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:47:53.342Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
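
The cleanup this warning suggests (deleting old, unused custom metric descriptors) goes through the Cloud Monitoring API rather than the Dataflow job itself. A minimal Java sketch using the google-cloud-monitoring client, with a hypothetical descriptor name standing in for a real unused metric:

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.MetricDescriptorName;

    // Deletes one unused custom metric descriptor so new ones can be created.
    // Run inside a method that may throw IOException; the descriptor name
    // below is hypothetical.
    try (MetricServiceClient client = MetricServiceClient.create()) {
      client.deleteMetricDescriptor(
          MetricDescriptorName.of(
              "apache-beam-testing", "custom.googleapis.com/old_unused_counter"));
    }
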
    Oct 14, 2021 12:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:13.924Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 14, 2021 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:42.653Z: Workers have started successfully.
    Oct 14, 2021 12:48:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:48:42.684Z: Workers have started successfully.
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.075Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.247Z: Cleaning up.
    Oct 14, 2021 12:49:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:49:08.337Z: Stopping worker pool...
    Oct 14, 2021 12:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:51:36.932Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 14, 2021 12:51:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-14T00:51:36.972Z: Worker pool stopped.
    Oct 14, 2021 12:51:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_17_47_17-4909132220303045062 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 470870de-9f18-49e4-8be5-bba995fae54a and timestamp: 2021-10-14T00:51:43.050000000Z:
                     Metric:                    Value:
                   read_time                     6.984
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 14, 2021 12:51:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
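
This warning is independent of the test failures: it only means the run's metrics were not exported to InfluxDB because the publisher configuration was absent. If publishing were wanted, the test pipeline options would need InfluxDB settings alongside the BigQuery ones; the option names below are an assumption for illustration, not taken from this log:

    --influxDatabase=beam_performance              # assumed option name
    --influxMeasurement=sql_bqio_read_java_batch   # assumed option name
    --influxHost=http://localhost:8086             # assumed host, for illustration
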

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 5 mins 17.345 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vgs62wac2ebjy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2541

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2541/display/redirect?page=changes>

Changes:

[dmitrii_kuzin] [BEAM-12730] Custom delimiter always bytes

[noreply] Minor: Add base_func to Series.aggregate expression names (#15706)

[noreply] [BEAM-13013] Extend cross language expansion. (#15698)

[noreply] Minor: Clarify confusing boolean logic for selecting PerWindowInvoker

[noreply] [BEAM-4424] Execute hooks in enabling order (#15713)

[noreply] [BEAM-11097] Update cache with metrics counting, windowing (#15717)


------------------------------------------
[...truncated 337.50 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
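
The IllegalStateException above is what fails this test (and readUsingDefaultMethod below): the Beam Row output of ParDo(RowMonitor) has no schema attached, so no coder can be inferred for it. A minimal sketch of the fix the error message itself suggests, assuming rows is the upstream PCollection<Row> and using a hypothetical monitoring DoFn and schema (names here are illustrative, not the test's actual code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Hypothetical schema for the projected columns (author, type, title, score).
    Schema schema = Schema.builder()
        .addStringField("author")
        .addStringField("type")
        .addStringField("title")
        .addInt64Field("score")
        .build();

    // RowMonitorDoFn stands in for the test's monitoring DoFn. Rows cannot be
    // encoded without a schema, so attach one explicitly via setRowSchema, as
    // the error message recommends.
    PCollection<Row> monitored = rows
        .apply(ParDo.of(new RowMonitorDoFn()))
        .setRowSchema(schema);
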

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
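
Unlike the two failing tests above, this push-down variant succeeds because the planner collapses the projection and filter into the IO source itself (the BeamPushDownIOSourceRel plan), so BigQuery evaluates them server-side. A rough hand-written equivalent of that read at the BigQueryIO level, assuming a hypothetical table spec (the SQL layer builds this call internally):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Project only the used fields and let the Storage Read API apply the
    // filter server-side, mirroring usedFields and BigQueryFilter above.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("apache-beam-testing:beam.HACKER_NEWS")  // assumed table spec
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
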
    Oct 13, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8420574861511368776.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-P2Bhkz_sJfL7Q44uCBCqe2Xl4LF6Avhx3qXt_9Nkgss.jar
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f0b712b6dd7ee126e924bf9d0f3f6ffc38d2a1f50187947e10c8b45c2f6700bd> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8LcStt1-4SbpJL-dDz9v_DjSofUBh5R-EMi0XC9nAL0.pb
    Oct 13, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_11_45_08-7133238093593566270?project=apache-beam-testing
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_11_45_08-7133238093593566270
    Oct 13, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_11_45_08-7133238093593566270
    Oct 13, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T18:45:11.818Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:17.665Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.408Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.439Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.467Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.537Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.573Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.597Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.919Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:18.986Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 6:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:45:28.162Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 6:46:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:16.983Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 6:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:42.016Z: Workers have started successfully.
    Oct 13, 2021 6:46:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:46:42.044Z: Workers have started successfully.
    Oct 13, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.336Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:47:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.505Z: Cleaning up.
    Oct 13, 2021 6:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:47:11.581Z: Stopping worker pool...
    Oct 13, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:49:39.615Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 6:49:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T18:49:39.658Z: Worker pool stopped.
    Oct 13, 2021 6:49:46 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_11_45_08-7133238093593566270 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 59461b1a-5fe0-47e5-888d-d482f1a0e346 and timestamp: 2021-10-13T18:49:46.513000000Z:
                     Metric:                    Value:
                   read_time                     8.923
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:49:46 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 57.197 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 29s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ejnkppnwz66y6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2540

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2540/display/redirect>

Changes:


------------------------------------------
[...truncated 346.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 12:45:22 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 12:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 13, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6472796556154093260.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6-OpaFPbxZfQnvZ74BQbvd3dTKRJrYuR9bFPtZrnM04.jar
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 9083bf457d324a2e69be7624cd67d20503d4061abda0914c7de495e5a4f7e74f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-kIO_RX0ySi5pvnYkzWfSBQPUBhq9oJFMfeSV5aT3508.pb
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-13_05_45_37-13725460614555904800?project=apache-beam-testing
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-13_05_45_37-13725460614555904800
    Oct 13, 2021 12:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-13_05_45_37-13725460614555904800
    Oct 13, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T12:45:40.850Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.122Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.914Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.952Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:45.982Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.053Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.074Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.096Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.401Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:46.462Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:45:57.500Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:46:31.975Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:01.126Z: Workers have started successfully.
    Oct 13, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:01.154Z: Workers have started successfully.
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:26.826Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:26.947Z: Cleaning up.
    Oct 13, 2021 12:47:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:47:27.019Z: Stopping worker pool...
    Oct 13, 2021 12:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:50:00.229Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 12:50:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T12:50:00.292Z: Worker pool stopped.
    Oct 13, 2021 12:50:06 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-13_05_45_37-13725460614555904800 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 134018ea-e282-4aed-a499-d629c31e0489 and timestamp: 2021-10-13T12:50:06.801000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.658

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:50:07 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 49.417 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 47s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/xsdl7nfgw72cu

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2539

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2539/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12513] Add JIRAs for missing Go SDK features (#15658)


------------------------------------------
[...truncated 338.53 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
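
The duplicate-binding report above ends by naming the winner. When that is unclear from the classpath alone, a short probe against the plain SLF4J API prints the bound factory; a minimal sketch:

    // Prints the logger factory SLF4J actually bound, e.g.
    // org.slf4j.impl.JDK14LoggerFactory as reported in the lines above.
    import org.slf4j.LoggerFactory;

    public class Slf4jBindingCheck {
      public static void main(String[] args) {
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
      }
    }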

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
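
The warning names its own replacement flag. A minimal sketch of parsing the same kind of Dataflow options with --sdkContainerImage in place of the deprecated flag; the argument values below are placeholders, not this build's configuration:

    // Parse Dataflow options using --sdkContainerImage instead of the
    // deprecated --workerHarnessContainerImage. All values are placeholders.
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class OptionsSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(
                    "--runner=DataflowRunner",
                    "--project=my-project",                                    // placeholder
                    "--region=us-central1",
                    "--sdkContainerImage=gcr.io/my-project/beam-worker:latest") // placeholder
                .as(DataflowPipelineOptions.class);
        System.out.println(options.getSdkContainerImage());
      }
    }
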
    Oct 13, 2021 6:44:55 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

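The exception text above names its own remedy: the Row output of ParDo(RowMonitor) has no inferable coder, so the PCollection needs an explicit schema. A self-contained sketch of that fix, using a toy schema (not the HACKER_NEWS table) that mirrors the projected columns:

    // Sketch of the fix the error suggests: attach a schema to a
    // PCollection<Row> with setRowSchema so Beam can derive its coder.
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      // Toy schema mirroring the columns projected by the query above.
      private static final Schema SCHEMA = Schema.builder()
          .addStringField("author")
          .addStringField("type")
          .addStringField("title")
          .addInt32Field("score")
          .build();

      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows = pipeline
            .apply(Create.of("alice,story,hello,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void process(@Element String line, OutputReceiver<Row> out) {
                String[] f = line.split(",");
                out.output(Row.withSchema(SCHEMA)
                    .addValues(f[0], f[1], f[2], Integer.parseInt(f[3])).build());
              }
            }))
            // The remedy named in the error text: a Row output has no inferable
            // coder, so attach the schema explicitly.
            .setRowSchema(SCHEMA);

        pipeline.run().waitUntilFinish();
      }
    }
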
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
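
The two entries above show the push-down working as intended: the projection (usedFields) and the filter both reach the BigQuery source. Written by hand against BigQueryIO, the equivalent DIRECT_READ looks like the sketch below; the table reference is an assumption, not taken from this build's configuration.

    // Sketch of the equivalent hand-written read: DIRECT_READ with the
    // projected fields and the pushed-down row restriction from the plan above.
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        pipeline.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full")   // assumed table
                .withMethod(Method.DIRECT_READ)
                // Column projection, as in usedFields=[[by, type, title, score]]:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down, as in the log line above:
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
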
    Oct 13, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5329754033931223608.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-eg3j8nxroWsXWhvqR6sqs5vs332QzaKjD1vsyC9Z9YE.jar
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash 26b54604c4b76432d3c68fd9600bd0ceb52d705abda6f6bad0fb171a3da65ab8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-JrVGBMS3ZDLTxo_ZYAvQzrUtcFq9pva60PsXGj2mWrg.pb
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_23_45_08-16050976645542794067?project=apache-beam-testing
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_23_45_08-16050976645542794067
    Oct 13, 2021 6:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_23_45_08-16050976645542794067
    Oct 13, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T06:45:14.702Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:22.328Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.090Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.130Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.164Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.262Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.324Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:23.433Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:24.003Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:24.090Z: Starting 5 workers in us-central1-a...
    Oct 13, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:45:47.036Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 6:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:11.991Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:37.849Z: Workers have started successfully.
    Oct 13, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:46:37.930Z: Workers have started successfully.
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:04.919Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:05.051Z: Cleaning up.
    Oct 13, 2021 6:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:47:05.152Z: Stopping worker pool...
    Oct 13, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:49:28.840Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 6:49:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T06:49:28.887Z: Worker pool stopped.
    Oct 13, 2021 6:50:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_23_45_08-16050976645542794067 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 36707a6e-b0ec-428a-b3ca-5f3bcf4395e4 and timestamp: 2021-10-13T06:50:29.606000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.513

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 6:50:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
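
The warning explains why the numbers above stayed local: the publisher received settings with no database or measurement. A sketch of supplying both via InfluxDBSettings from the same testutils package; the builder method names are assumptions based on that class, and all values are placeholders.

    // Construct InfluxDB settings carrying the two properties the warning says
    // are missing. Method names assumed from
    // org.apache.beam.sdk.testutils.publishing.InfluxDBSettings; values are placeholders.
    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_performance")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        // 'settings' would then be handed to the publisher by the test harness.
      }
    }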

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 5 mins 40.216 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/nyuyzlzxcc3aw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2538

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2538/display/redirect?page=changes>

Changes:

[cgray] [BEAM-7169] Fix singleton assertion in PAssert javadoc.

[noreply] Revert "[BEAM-10913] - Forcing update of YAML file by running kubectl

[kawaigin] [BEAM-12997] Upgraded labextension to V3


------------------------------------------
[...truncated 339.08 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 13, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 13, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 13, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 13, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 13, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 13, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 13, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5890577408425665627.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ovOpIMFGfYMPgXz9nCHRcqfUtqQ9XmzJ40yBKdUcEAA.jar
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 13, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 3d32be64863b6479ae02f48feaeb168c366b2a73d3888d04661245ff17bcd1fc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-PTK-ZIY7ZHmuAvSP6usWjDZrKnPTiI0EZhJF_xe80fw.pb
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 13, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_17_45_05-1080144184160942624?project=apache-beam-testing
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_17_45_05-1080144184160942624
    Oct 13, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_17_45_05-1080144184160942624
    Oct 13, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-13T00:45:09.007Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 13, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:15.973Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.784Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.871Z: Expanding GroupByKey operations into optimizable parts.
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.908Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:16.994Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.022Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.053Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.515Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:17.605Z: Starting 5 workers in us-central1-c...
    Oct 13, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:44.250Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 13, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:51.425Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:45:51.450Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 13, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:01.702Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 13, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:26.111Z: Workers have started successfully.
    Oct 13, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:26.144Z: Workers have started successfully.
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.104Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.327Z: Cleaning up.
    Oct 13, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:46:56.423Z: Stopping worker pool...
    Oct 13, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:49:19.554Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 13, 2021 12:49:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-13T00:49:19.595Z: Worker pool stopped.
    Oct 13, 2021 12:49:25 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_17_45_05-1080144184160942624 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cc740fa4-6677-4c79-af50-065edcf5ff5a and timestamp: 2021-10-13T00:49:25.658000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.499

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 13, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.55 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/x2yytbifuhxm6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2537

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2537/display/redirect?page=changes>

Changes:

[heejong] Update Dataflow Python container tag

[noreply] Merge pull request #15667 from [BEAM-12730] Add custom delimiters to

[noreply] Merge pull request #15700 from [BEAM-12987] [Playground] Add page UI


------------------------------------------
[...truncated 338.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 6:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
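
At the IO level, the pushed-down projection and filter are equivalent to configuring a BigQuery Storage API read by hand. A hedged sketch of that read (the table reference is assumed from the public Hacker News dataset, not read out of the test):

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    BigQueryIO.readTableRows()
        .from("bigquery-public-data:hacker_news.full")  // assumed table
        .withMethod(TypedRead.Method.DIRECT_READ)
        // Projection push-down: only the four referenced columns leave BigQuery.
        .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
        // Predicate push-down: the Storage API evaluates this server-side.
        .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");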
    Oct 12, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9156011587531586222.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-G5FJjMI4Zb3G-2y_r_qh-cNidmMkm8Mkw7mRybaDqzc.jar
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cada289d233b484d41187a59ee827447e170ebe67ae40dbc9f82b02944aaf846> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ytoonSM7SE1BGHpZ7oJ0R-Fw6-Z65A28n4KwKUSq-EY.pb
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_11_45_12-3882144811092247120?project=apache-beam-testing
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_11_45_12-3882144811092247120
    Oct 12, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_11_45_12-3882144811092247120
    Oct 12, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T18:45:15.138Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:49.229Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:49.998Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.032Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.064Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.122Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.158Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.183Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.462Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:50.534Z: Starting 5 workers in us-central1-a...
    Oct 12, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:45:57.251Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:33.207Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:58.325Z: Workers have started successfully.
    Oct 12, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:46:58.360Z: Workers have started successfully.
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.575Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.718Z: Cleaning up.
    Oct 12, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:47:24.778Z: Stopping worker pool...
    Oct 12, 2021 6:49:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:49:46.392Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 6:49:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T18:49:46.437Z: Worker pool stopped.
    Oct 12, 2021 6:49:52 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_11_45_12-3882144811092247120 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6cfe67a8-8759-499b-8a38-a5522f455ed2 and timestamp: 2021-10-12T18:49:52.693000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.854

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:49:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
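
The publisher is invoked without an InfluxDB target, so this run's metrics reach only the console and the configured BigQuery table. By analogy with other Beam performance jobs, the missing properties would be supplied as extra pipeline options in -DbeamTestPipelineOptions; the option names and values below are an assumption, not taken from this job's configuration:

    ["...","--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://localhost:8086"]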

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 58.361 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cyvoyr2vpez3i

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2536

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2536/display/redirect>

Changes:


------------------------------------------
[...truncated 338.81 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
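
The same coder failure recurs in this build. Besides the setRowSchema route sketched earlier, the first root cause in the message (".setCoder()") has a direct equivalent for Rows; a minimal sketch, assuming the same illustrative `rows` and `schema` as before:

    import org.apache.beam.sdk.coders.RowCoder;

    // RowCoder.of(schema) is exactly the coder the registry fails to infer here.
    rows.setCoder(RowCoder.of(schema));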

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 12, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7895948252583150716.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-arSWOV_dfe951arcAp7L8XDletVdhHzyc8DIuEdjsFI.jar
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 41fcc841ddfc9ef59ae9760a31a90d082317583a9adbe6d9b045936d1e56e399> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-QfzIQd38nvWa6XYKMakNCCMXWDqa2-bZsEWTbR5W45k.pb
    Oct 12, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-12_05_45_05-2662568501819662847?project=apache-beam-testing
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-12_05_45_05-2662568501819662847
    Oct 12, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-12_05_45_05-2662568501819662847
    Oct 12, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T12:45:09.039Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:15.881Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.740Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.786Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.884Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.916Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:16.951Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:17.332Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:17.408Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:49.636Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:50.458Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:45:50.488Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 12:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:00.737Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:24.496Z: Workers have started successfully.
    Oct 12, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:24.530Z: Workers have started successfully.
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.321Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.532Z: Cleaning up.
    Oct 12, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:46:53.630Z: Stopping worker pool...
    Oct 12, 2021 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:49:17.532Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 12:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T12:49:17.600Z: Worker pool stopped.
    Oct 12, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-12_05_45_05-2662568501819662847 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8d27dced-5f5c-4692-850e-013bd0a03311 and timestamp: 2021-10-12T12:49:24.938000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.291

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:49:25 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 36.719 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wor7ifwppuxya

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2535

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2535/display/redirect?page=changes>

Changes:

[noreply] Release 2.33 documentation update (#15543)

[noreply] Merge pull request #15441 from [BEAM-8823] Make FnApiRunner work by


------------------------------------------
[...truncated 339.65 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 6:45:00 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 12, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1656578058968929539.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-vB53NHrbQW8QZoMCtAlaqt_fDfHQ9F8X8WYxoWBOEus.jar
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a1d7903616da2308799c898c85df2e929be1bf94611a7f759b2e34c14cdda2d4> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-odeQNhbaIwh5nImMhd8ukpvhv5RhGn91my40wUzdotQ.pb
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_23_45_13-10942643506701589531?project=apache-beam-testing
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_23_45_13-10942643506701589531
    Oct 12, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_23_45_13-10942643506701589531
    Oct 12, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T06:45:17.883Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:29.764Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.505Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.556Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.591Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.646Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.678Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:30.743Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:31.159Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:31.249Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:45:52.392Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:04.869Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:04.906Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 6:46:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:15.158Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:39.226Z: Workers have started successfully.
    Oct 12, 2021 6:46:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:46:39.256Z: Workers have started successfully.
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.582Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.744Z: Cleaning up.
    Oct 12, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:47:05.835Z: Stopping worker pool...
    Oct 12, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:49:22.828Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T06:49:22.868Z: Worker pool stopped.
    Oct 12, 2021 6:49:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_23_45_13-10942643506701589531 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 05d6afe8-5791-4c81-a089-8ca32e5e56cb and timestamp: 2021-10-12T06:49:29.700000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.784
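
The fields_read and read_time values above come from the pipeline's monitor transforms (ParDo(RowMonitor), ParDo(TimeMonitor)). As a rough sketch only, a monitor DoFn of that kind could report such values through the Beam Metrics API; the class and metric names below are illustrative, not the actual test utilities:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Illustrative monitor DoFn: counts fields seen and records an element
    // timestamp; a harness can derive fields_read and read_time from these.
    class MonitorFn extends DoFn<Row, Row> {
      private final Counter fieldsRead =
          Metrics.counter(MonitorFn.class, "fields_read");
      private final Distribution timestamps =
          Metrics.distribution(MonitorFn.class, "element_time_millis");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());           // one increment per field read
        timestamps.update(System.currentTimeMillis()); // min/max bound the read window
        out.output(row);                               // pass the element through
      }
    }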

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 34.085 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kmemclpwcfima

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2534

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2534/display/redirect?page=changes>

Changes:

[rogan.o.morrow] [BEAM-12875] Register file systems in SparkExecutableStageFunction

[Kyle Weaver] [BEAM-12694] Include datetime in dicom test dataset name.

[noreply] [BEAM-12960] Add extra context for structural DoFn ProcessElement error


------------------------------------------
[...truncated 339.54 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'
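
The -DbeamTestPipelineOptions JSON array in the command above is how the harness hands pipeline options (project, temp locations, runner, worker counts) to the test JVM: Beam's TestPipeline parses that system property. A minimal sketch of an integration test picking those options up (the test class and method are illustrative):

    import org.apache.beam.sdk.testing.TestPipeline;
    import org.junit.Rule;
    import org.junit.Test;

    public class ExampleIT {
      // TestPipeline.create() reads the beamTestPipelineOptions system property
      // and turns its JSON array of arguments into the pipeline's options.
      @Rule public final transient TestPipeline pipeline = TestPipeline.create();

      @Test
      public void runsOnTheConfiguredRunner() {
        // pipeline.getOptions() now reflects --runner=DataflowRunner etc.
        // ... apply the transforms under test to `pipeline` here ...
        pipeline.run().waitUntilFinish();
      }
    }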

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 12, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 12, 2021 12:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 12, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
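
The SQL text and the SQLPlan>/BEAMPlan> trees above are emitted by Beam SQL's Calcite-based planner. A self-contained sketch of running a comparable query with SqlTransform over an in-memory table (the schema and rows are made up for illustration):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SqlPlanSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> input = p.apply(Create.of(
                Row.withSchema(schema).addValues("alice", "story", "Hello", 3L).build(),
                Row.withSchema(schema).addValues("bob", "comment", "Re: Hello", 9L).build())
            .withRowSchema(schema));

        // Calcite plans this as a LogicalProject over a LogicalFilter, then Beam
        // lowers it to a BeamCalcRel, as in the BEAMPlan> output above.
        PCollection<Row> result = input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }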


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
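
The failure above is coder inference giving up on a Row output that has no schema attached. A minimal sketch of the remedy the message itself suggests, calling setRowSchema on the output of a Row-producing ParDo (names and data are illustrative, not the test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.coders.StringUtf8Coder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFix {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        final Schema schema = Schema.builder()
            .addStringField("type")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows = p
            .apply(Create.of("story:3", "job:5").withCoder(StringUtf8Coder.of()))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String line, OutputReceiver<Row> out) {
                String[] parts = line.split(":");
                out.output(Row.withSchema(schema)
                    .addValues(parts[0], Long.parseLong(parts[1]))
                    .build());
              }
            }))
            // A coder for Row cannot be inferred from the CoderRegistry; attaching
            // the schema gives this output a SchemaCoder and avoids the
            // IllegalStateException above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }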

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 12, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
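
At the IO level, the push-down above amounts to asking the BigQuery Storage Read API for only the used fields plus a row restriction. A rough sketch of an equivalent direct BigQueryIO read (the table reference is an assumption, not the test's actual wiring):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Project only the used fields and let BigQuery evaluate the filter,
        // mirroring usedFields=[by, type, title, score] and the pushed filter.
        p.apply(BigQueryIO.readTableRows()
            .from("some-project:beam.HACKER_NEWS")  // illustrative table reference
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
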
    Oct 12, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 12, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6089847685525497808.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j0RlgzwqoJ9IuGP71TKsPPY-HLJ9FhDCNoESendTWzg.jar
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 12, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash dc0f7feb3f06ad79adde85b473982d38bc7dc1dc917e00a73b32700b1f4e1504> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3A9_6z8GrXmt3oW0c5gtOLx9wdyRfgCnOzJwCx9OFQQ.pb
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 12, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_17_45_12-3290421013320524464?project=apache-beam-testing
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_17_45_12-3290421013320524464
    Oct 12, 2021 12:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_17_45_12-3290421013320524464
    Oct 12, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-12T00:45:15.981Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 12, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:22.749Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.691Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.735Z: Expanding GroupByKey operations into optimizable parts.
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.755Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.818Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.849Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 12, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:23.882Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 12, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:24.265Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:24.347Z: Starting 5 workers in us-central1-c...
    Oct 12, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:34.481Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 12, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:54.613Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:45:54.641Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 12, 2021 12:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:04.919Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 12, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:28.753Z: Workers have started successfully.
    Oct 12, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:28.779Z: Workers have started successfully.
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.628Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.777Z: Cleaning up.
    Oct 12, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:46:57.853Z: Stopping worker pool...
    Oct 12, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:49:20.531Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 12, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-12T00:49:20.571Z: Worker pool stopped.
    Oct 12, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_17_45_12-3290421013320524464 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d1ffe6f6-3f14-474e-aab9-210557f33f07 and timestamp: 2021-10-12T00:49:26.446000000Z:
                     Metric:                    Value:
                   read_time                     8.046
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 12, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 33.115 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dx2eeycq4b3tm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2533

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2533/display/redirect?page=changes>

Changes:

[rogelio.hernandez] [BEAM-12371][BEAM-12373] Fixed page navigation


------------------------------------------
[...truncated 338.42 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4286376879441808855.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LZNM0OjPKJ8YxiJnDJthhP5e5RkAmgFi1XNpGVXx5GQ.jar
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash d8c0b5f14e368b35fc370d35c4017b16d55a452ebed7138b264fbac650535a02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2MC18U42izX8Nw01xAF7FtVaRS6-1xOLJk-6xlBTWgI.pb
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_11_45_05-18093358603790928327?project=apache-beam-testing
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_11_45_05-18093358603790928327
    Oct 11, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_11_45_05-18093358603790928327
    Oct 11, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T18:45:08.828Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.276Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.904Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.924Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:16.949Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.007Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.026Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.051Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.359Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:17.413Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:43.092Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:44.397Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:44.426Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 11, 2021 6:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:45:54.708Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:19.615Z: Workers have started successfully.
    Oct 11, 2021 6:46:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:19.641Z: Workers have started successfully.
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.705Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.834Z: Cleaning up.
    Oct 11, 2021 6:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:46:48.903Z: Stopping worker pool...
    Oct 11, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:49:11.811Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T18:49:11.847Z: Worker pool stopped.
    Oct 11, 2021 6:49:17 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_11_45_05-18093358603790928327 finished with status DONE.


Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2bcaf07e-50cc-4f88-b2ad-67a308acc5cd and timestamp: 2021-10-11T18:49:17.896000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.985

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 28.983 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/76oq36igpdb5o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2532

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2532/display/redirect>

Changes:


------------------------------------------
[...truncated 337.79 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
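
The coder failure above (repeated for each build below) is the root cause of both failing read tests: a ParDo that re-emits Beam Row elements produces a PCollection<Row>, and Beam cannot infer a RowCoder without an explicit schema, because schemas do not propagate through an opaque ParDo. A minimal, self-contained sketch of the remedy the error message itself suggests; the class name, pass-through DoFn, and field types here are illustrative assumptions, not the IT's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // Assumed schema for illustration; the real HACKER_NEWS table differs.
        Schema schema =
            Schema.builder().addStringField("type").addInt32Field("score").build();

        // A pass-through DoFn standing in for the IT's RowMonitor (hypothetical).
        DoFn<Row, Row> monitor =
            new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            };

        PCollection<Row> rows =
            p.apply(
                    Create.of(Row.withSchema(schema).addValues("story", 3).build())
                        .withRowSchema(schema))
                .apply("RowMonitor", ParDo.of(monitor))
                // Without this line the pipeline fails exactly as in the log;
                // equivalently one could call .setCoder(RowCoder.of(schema)).
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }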

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
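
For comparison, the pushed-down read above is roughly what one would write by hand against the BigQuery Storage Read API: project only the used fields and let BigQuery evaluate the filter server-side. A sketch under stated assumptions (the table reference is illustrative, not the IT's registered HACKER_NEWS table, and GCP project options are elided):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class PushDownByHandSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table, for illustration
                .withMethod(Method.DIRECT_READ)
                // Column projection: only the fields the query uses are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: BigQuery evaluates the filter server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
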
    Oct 11, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test778636717883802267.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lKsOXenpkv6QMTNLvkLm8r-EXGjI6lC38EmWyvOVDn8.jar
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash 7c6d3cf99c79f389dc609e13d48842d3075d221f2d42ccf0f31b76455950b2e5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fG08-Zx584ncYJ4T1IhC0wddIh8tQszw8xt2RVlQsuU.pb
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-11_05_45_04-1710193203739782743?project=apache-beam-testing
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-11_05_45_04-1710193203739782743
    Oct 11, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-11_05_45_04-1710193203739782743
    Oct 11, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T12:45:07.518Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:12.466Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.187Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.225Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.260Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.322Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.361Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.391Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.748Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:13.817Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:20.881Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:45:53.869Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:22.354Z: Workers have started successfully.
    Oct 11, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:22.389Z: Workers have started successfully.
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.120Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.253Z: Cleaning up.
    Oct 11, 2021 12:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:46:51.321Z: Stopping worker pool...
    Oct 11, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:49:25.496Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 12:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T12:49:25.540Z: Worker pool stopped.
    Oct 11, 2021 12:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-11_05_45_04-1710193203739782743 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a8ebbd03-21ad-43af-b43f-7aa5e9bf972a and timestamp: 2021-10-11T12:49:30.729000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.623

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
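
    (For the one passing test, fields_read is the total count of field values read and read_time is the measured read phase, presumably in seconds given the metric name. The InfluxDB warning is benign here: no measurement/database is configured for the publisher, so results are exported only to the BigQuery table named by --metricsBigQueryTable.)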

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 43.437 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/msnw2f5fkfaz6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2531

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2531/display/redirect>

Changes:


------------------------------------------
[...truncated 337.93 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1003844912941934074.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-dd-ZacZ0zsfWNkqOsDmw4e1TJ2wG79wAi6KgZu2aVD8.jar
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c1ac01cffce0988c39de57b60bc40d02a652a96211e13281de9e170668b7426e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wawBz_zgmIw53le2C8QNAqZSqWIR4TKB3p4XBmi3Qm4.pb
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_23_45_09-8607510149780975094?project=apache-beam-testing
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_23_45_09-8607510149780975094
    Oct 11, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_23_45_09-8607510149780975094
    Oct 11, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T06:45:13.115Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:18.609Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.161Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.206Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.237Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.293Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.318Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.349Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.674Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:19.735Z: Starting 5 workers in us-central1-a...
    Oct 11, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:45:40.128Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:00.053Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:25.928Z: Workers have started successfully.
    Oct 11, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:25.951Z: Workers have started successfully.
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:53.973Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:54.085Z: Cleaning up.
    Oct 11, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:46:54.140Z: Stopping worker pool...
    Oct 11, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:49:15.456Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T06:49:15.499Z: Worker pool stopped.
    Oct 11, 2021 6:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_23_45_09-8607510149780975094 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 42eded31-3dcc-4550-b08b-ba58c9fd83c4 and timestamp: 2021-10-11T06:49:21.759000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.357

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 6:49:22 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 30.191 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/pf46l4npmi4u6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2530

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2530/display/redirect>

Changes:


------------------------------------------
[...truncated 339.46 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 11, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 11, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 11, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 11, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 11, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 11, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 11, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 11, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4218387626765777387.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-OqCx62PIOcIQyNk2n9dSMNWhhAjS40_CeL6Cs7lQYYU.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Oct 11, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Oct 11, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 11, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 95263ea5b7317c83ac9660b3d2b6f1c92409ff428a2713ed9f3c00c0ea702db7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lSY-pbcxfIOslmCz0rbxySQJ_0KKJxPtnzwAwOpwLbc.pb
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 11, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_17_45_04-6294139650922413111?project=apache-beam-testing
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_17_45_04-6294139650922413111
    Oct 11, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_17_45_04-6294139650922413111
    Oct 11, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-11T00:45:07.738Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:13.816Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.579Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.619Z: Expanding GroupByKey operations into optimizable parts.
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.655Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.732Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.752Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 11, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:14.776Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 11, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:15.128Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:15.194Z: Starting 5 workers in us-central1-c...
    Oct 11, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:40.712Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 11, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:45.036Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:45.055Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 11, 2021 12:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:45:55.342Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 11, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:20.260Z: Workers have started successfully.
    Oct 11, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:20.294Z: Workers have started successfully.
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:51.865Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:52.022Z: Cleaning up.
    Oct 11, 2021 12:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:46:52.096Z: Stopping worker pool...
    Oct 11, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:49:14.300Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 11, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-11T00:49:14.495Z: Worker pool stopped.
    Oct 11, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_17_45_04-6294139650922413111 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 05c48868-95eb-4f51-98cd-81588487acc7 and timestamp: 2021-10-11T00:49:21.537000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.828
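
Note: counters such as fields_read and read_time above are custom Beam metrics reported by the monitor DoFns. As a hedged sketch (the namespace string and the pipeline variable are placeholders, not taken from the test code), counters like these can be queried from the PipelineResult once the job reaches a terminal state:

    import org.apache.beam.sdk.PipelineResult;
    import org.apache.beam.sdk.metrics.MetricNameFilter;
    import org.apache.beam.sdk.metrics.MetricQueryResults;
    import org.apache.beam.sdk.metrics.MetricsFilter;

    // Hedged sketch: query a counter such as fields_read once the job is done.
    // "sql_bqio" and the pipeline variable are placeholders/assumptions.
    PipelineResult result = pipeline.run();
    result.waitUntilFinish();
    MetricQueryResults metrics =
        result.metrics()
            .queryMetrics(
                MetricsFilter.builder()
                    .addNameFilter(MetricNameFilter.named("sql_bqio", "fields_read"))
                    .build());
    metrics.getCounters()
        .forEach(c -> System.out.println(c.getName() + " = " + c.getAttempted()));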

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 11, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
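
Note: the warning above means the harness skipped exporting results because no InfluxDB measurement/database was configured. A rough sketch of supplying those properties through Beam's test publishing utilities follows; the builder method names and values are assumptions inferred from the warning text, not verified signatures:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Assumed API sketch (method names are assumptions): provide the
    // measurement/database the publisher reported as missing.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // placeholder
            .withDatabase("beam_test_metrics")            // placeholder
            .withMeasurement("sql_bqio_read_java_batch")  // placeholder
            .get();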

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 33.878 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 3s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ifyxiyzfltgzy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2529

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2529/display/redirect>

Changes:


------------------------------------------
[...truncated 339.05 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
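
Note: this is the statement the IT compiles into the relational plans logged below. A minimal sketch of issuing an equivalent query through Beam SQL's SqlTransform (table registration is omitted, so the table name and pipeline variable are assumptions):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Minimal sketch, assuming a table named HACKER_NEWS has already been
    // registered with the planner (e.g. via a BigQuery table provider).
    PCollection<Row> filtered =
        pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
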
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
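
Note: the failure is the planner handing back a PCollection<Row> with no schema attached, so no coder can be inferred. A minimal sketch of the remedy the message itself suggests; the field names/types are assumptions matching the projected columns, not the test's real schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Minimal sketch: attach a schema so the CoderRegistry can provide a coder
    // for Beam Rows. The schema below is an assumption based on the SELECT list.
    Schema schema =
        Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

    PCollection<Row> rows = /* hypothetical: the Row output of the SQL rel */ null;
    rows.setRowSchema(schema); // equivalent to rows.setCoder(RowCoder.of(schema))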

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
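
Note: the push-down means the projection (by, type, title, score) and the filter above are evaluated by the BigQuery Storage API at the source rather than inside the pipeline. A rough equivalent expressed directly with BigQueryIO, where the table reference and pipeline variable are placeholders, not the table the IT actually reads:

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    // Rough equivalent of the pushed-down read; table name is a placeholder.
    pipeline.apply(
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // placeholder
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
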
    Oct 10, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 6:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3532494700713359605.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZPgtTCDcy_jy0olujt5ro-El35vukIyHbo8kpFGe-wE.jar
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 6:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7e63006485f5391bb51d6d13a7b3c1593fe22df7a537f7e61d6c7808099e03d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fmMAZIX1ORu1HW0Tp7PBWT_iLfelN_fmHWx4CAmeA9M.pb
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_11_45_09-12472628253534294307?project=apache-beam-testing
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_11_45_09-12472628253534294307
    Oct 10, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_11_45_09-12472628253534294307
    Oct 10, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T18:45:12.793Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.110Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.772Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.806Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.831Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.914Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.945Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:19.978Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:20.299Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:20.365Z: Starting 5 workers in us-central1-a...
    Oct 10, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:29.810Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 6:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:55.494Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:45:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:45:55.530Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 10, 2021 6:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:05.793Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:29.308Z: Workers have started successfully.
    Oct 10, 2021 6:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:29.331Z: Workers have started successfully.
    Oct 10, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:57.900Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:58.034Z: Cleaning up.
    Oct 10, 2021 6:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:46:58.137Z: Stopping worker pool...
    Oct 10, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:49:18.101Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T18:49:18.130Z: Worker pool stopped.
    Oct 10, 2021 6:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_11_45_09-12472628253534294307 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 5d20fcb0-e9ac-468b-b2b2-cf607b57aeb4 and timestamp: 2021-10-10T18:49:26.057000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.163

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 35.464 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/vz7tgyguu7d46

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2528

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2528/display/redirect>

Changes:


------------------------------------------
[...truncated 338.49 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1895807884124141554.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ERQ9yF0p56hwD1RWzYwEiAM_g4s2mISho-61HNAs5KU.jar
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash de07d5a1f7805377cb4931e4574f19ae2112c9837fe6f3413354dda332cb2c30> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-3gfVofeAU3fLSTHkV08ZriESyYN_5vNBM1TdozLLLDA.pb
    Oct 10, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-10_05_45_09-2904342065627502034?project=apache-beam-testing
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-10_05_45_09-2904342065627502034
    Oct 10, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-10_05_45_09-2904342065627502034
    Oct 10, 2021 12:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T12:45:12.297Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:18.367Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.055Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.105Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.126Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.186Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.210Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.245Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.537Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:19.616Z: Starting 5 workers in us-central1-a...
    Oct 10, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:45:35.352Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:04.443Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:31.118Z: Workers have started successfully.
    Oct 10, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:31.163Z: Workers have started successfully.
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.048Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.206Z: Cleaning up.
    Oct 10, 2021 12:47:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:46:59.280Z: Stopping worker pool...
    Oct 10, 2021 12:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:49:27.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 12:49:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T12:49:27.783Z: Worker pool stopped.
    Oct 10, 2021 12:49:32 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-10_05_45_09-2904342065627502034 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1c35b425-5ff9-4559-9a91-7aac8090385b and timestamp: 2021-10-10T12:49:33.012000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.716

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 41.982 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b2cpjsgi5dsco

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2527

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2527/display/redirect>

Changes:


------------------------------------------
[...truncated 339.74 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
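
This multiple-bindings warning is benign: SLF4J simply picks one binding from the classpath (here the JDK14 binding, per the "Actual binding" line), and the identical warning recurs in every run below. If a single binding were preferred, the usual Gradle-side remedy is a dependency exclusion. A sketch in Gradle's Groovy DSL, not taken from Beam's actual build files:

    configurations.all {
        // Keep only the binding shaded into the worker jar by dropping
        // slf4j-jdk14 from every configuration.
        exclude group: 'org.slf4j', module: 'slf4j-jdk14'
    }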

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 6:45:01 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
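
This is the stock Beam "unable to infer a Coder" failure for PCollections of Row: a Row carries no default coder, so a schema must be attached before pipeline construction can finish (the same failure recurs in the #2526 and #2525 logs below). A minimal, self-contained sketch of the fix the exception itself suggests; the class name, field names, and pass-through DoFn are illustrative stand-ins, not the actual BigQueryIOPushDownIT code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Schema mirroring the four projected columns in the query above.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // A pass-through ParDo emitting Rows, standing in for ParDo(RowMonitor).
        PCollection<Row> monitored = rows
            .apply(ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            // The fix named in the exception: attach the schema so a coder can
            // be built for the ParDo output. Without this call, pipeline
            // construction fails exactly as in the log above.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

setRowSchema effectively installs a RowCoder for the given schema, which is why the first root cause in the message offers setCoder as the manual alternative.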

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2451411109503935012.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NNFK-BsKFY9axwHRZvvrKPCF9gDEYqqJ7gBHAF8oDZI.jar
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 942a12ffd5f7c57c67d90c09b1d581b1ff4231356abc11a0f96b5317f2605350> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-lCoS_9X3xXxn2QwJsdWBsf9CMTVqvBGg-WtTF_JgU1A.pb
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_23_45_16-9184379378238017825?project=apache-beam-testing
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_23_45_16-9184379378238017825
    Oct 10, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_23_45_16-9184379378238017825
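
The same gcloud surface can be used to inspect the job rather than cancel it; assuming an authenticated gcloud SDK with access to the project, for example:

    > gcloud dataflow jobs describe --project=apache-beam-testing --region=us-central1 2021-10-09_23_45_16-9184379378238017825
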
    Oct 10, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T06:45:19.351Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:25.639Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.316Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.355Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.392Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.472Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.501Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.548Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.893Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:26.965Z: Starting 5 workers in us-central1-c...
    Oct 10, 2021 6:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:45:47.868Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:07.715Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:32.477Z: Workers have started successfully.
    Oct 10, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:46:32.506Z: Workers have started successfully.
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.606Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.764Z: Cleaning up.
    Oct 10, 2021 6:47:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:47:01.867Z: Stopping worker pool...
    Oct 10, 2021 6:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:49:16.626Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 6:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T06:49:16.661Z: Worker pool stopped.
    Oct 10, 2021 6:49:23 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_23_45_16-9184379378238017825 finished with status DONE.
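
The successful push-down run above (usedFields plus a supported BigQueryFilter, then "Pushing down the following filter") is roughly what a hand-written BigQueryIO read over the Storage Read API looks like. A sketch under that assumption: the table reference is assumed (the log never names the backing table), while the selected fields and row restriction mirror the logged plan:

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed source table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection, mirroring usedFields=[[by, type, title, score]].
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Server-side filter, mirroring the pushed-down predicate logged above.
                // Both options require the DIRECT_READ (Storage Read API) method.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }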

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 62fb6eee-6aeb-4ca6-9bd7-cac4c37200f7 and timestamp: 2021-10-10T06:49:23.372000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.419

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 6:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
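
This InfluxDB warning is benign for the run above: the load-test metrics were still collected (the command line targets BigQuery via --metricsBigQueryDataset/--metricsBigQueryTable), and only the InfluxDB publishing step is skipped. If InfluxDB publishing were wanted, it would presumably be enabled through additional entries in the -DbeamTestPipelineOptions array; the option names below are assumed from Beam's InfluxDB test-publishing utilities, and the database/host values are placeholders, none of it verified against this job's configuration:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_test_metrics","--influxHost=http://localhost:8086"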

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 26.772 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6fm4dycguo2yo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2526

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2526/display/redirect>

Changes:


------------------------------------------
[...truncated 339.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 10, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 10, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 10, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 10, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 10, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 10, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 10, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4350555590823145877.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-S8hEUruBVC4l4WRy8xgM4mh9koDJZiIwn_wjiJQHuME.jar
    Oct 10, 2021 12:45:13 AM org.apache.beam.sdk.metrics.MetricsEnvironment getCurrentContainer
    WARNING: Reporting metrics are not supported in the current execution environment.
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 11 seconds
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 10, 2021 12:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104895 bytes, hash da7538f12b6d1cfe170b19e872238c117d7a99f2f9196b42384201cd72348641> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-2nU48SttHP4XCxnociOMEX16mfL5GWtCOEIBzXI0hkE.pb
    Oct 10, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 10, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_17_45_17-11111101773425098417?project=apache-beam-testing
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_17_45_17-11111101773425098417
    Oct 10, 2021 12:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_17_45_17-11111101773425098417
    Oct 10, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-10T00:45:20.252Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:26.819Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.560Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.602Z: Expanding GroupByKey operations into optimizable parts.
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.630Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.704Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.731Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 10, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:27.761Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 10, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:28.077Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:28.141Z: Starting 5 workers in us-central1-c...
    Oct 10, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:50.024Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 10, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:57.936Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:45:57.967Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 10, 2021 12:46:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:08.231Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 10, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:35.013Z: Workers have started successfully.
    Oct 10, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:46:35.046Z: Workers have started successfully.
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:09.896Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:10.047Z: Cleaning up.
    Oct 10, 2021 12:47:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:47:10.121Z: Stopping worker pool...
    Oct 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:49:27.359Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 10, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-10T00:49:27.394Z: Worker pool stopped.
    Oct 10, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_17_45_17-11111101773425098417 finished with status DONE.
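
Earlier in this run the service warned "Resized worker pool to 2, though goal was 5. This could be a quota issue." The pool reached 5 workers about ten seconds later, so this run was unaffected, but when such messages persist the regional Compute Engine quotas can be inspected directly; assuming gcloud access to the project:

    > gcloud compute regions describe us-central1 --project=apache-beam-testing

This lists each per-region quota metric (CPUS, IN_USE_ADDRESSES, and so on) with its current usage and limit.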

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8997a3ba-c2a6-4ea2-b65e-28fdd0c0a3a6 and timestamp: 2021-10-10T00:49:32.700000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                       7.5

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 10, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 44.061 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/njgwonvoougw2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2525

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2525/display/redirect>

Changes:


------------------------------------------
[...truncated 337.82 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
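
Here, by contrast, the planner has collapsed the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel: only the four used fields are requested, and the whole predicate lands in the supported{} bucket, so it is evaluated by the BigQuery Storage Read API instead of on a Beam worker. Roughly what the pushed-down source amounts to, sketched directly against BigQueryIO (the table path and class names are assumptions, not the IT's actual code):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    class PushDownEquivalent {
      /** Projection via selected fields, filter via a row restriction. */
      static PCollection<TableRow> read(Pipeline p) {
        return p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }
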
    Oct 09, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8695599148799655228.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PYDgNAkrANxWBl5Rjor_KQiFweeelrGmrKf8bNXKtHs.jar
    Oct 09, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 07414eb65127ed6ea6c1e88cc3a245556e87e5a23b602e84cc4966d4a0a862d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-B0FOtlEn7W6mweiMw6JFVW6H5aI7YC6EzElm1KCoYtY.pb
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
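
The three steps mirror the fused stage executed below: the pushed-down source (s1) followed by two monitoring ParDos (s2, s3) that feed the read_time and fields_read metrics reported at the end of the run. A hypothetical monitor in that spirit (not Beam's actual testutils code):

    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    /** Records a processing timestamp per element; the min/max of the
     *  distribution later yield the stage's elapsed read time. */
    class TimeMonitorFn extends DoFn<Row, Row> {
      private final Distribution times = Metrics.distribution("perf", "read_time");

      @ProcessElement
      public void processElement(ProcessContext c) {
        times.update(System.currentTimeMillis());
        c.output(c.element());
      }
    }
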
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_11_45_13-17975078393602797598?project=apache-beam-testing
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_11_45_13-17975078393602797598
    Oct 09, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_11_45_13-17975078393602797598
    Oct 09, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T18:45:16.196Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:24.697Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.541Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.575Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.622Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.695Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.724Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:25.758Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:26.094Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:26.171Z: Starting 5 workers in us-central1-a...
    Oct 09, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:45:38.467Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:05.933Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:31.652Z: Workers have started successfully.
    Oct 09, 2021 6:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:46:31.678Z: Workers have started successfully.
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.011Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.165Z: Cleaning up.
    Oct 09, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:47:01.228Z: Stopping worker pool...
    Oct 09, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:49:20.907Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 6:49:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T18:49:20.943Z: Worker pool stopped.
    Oct 09, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_11_45_13-17975078393602797598 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dbde7288-f025-433b-8031-b2f85252ab44 and timestamp: 2021-10-09T18:49:27.241000000Z:
                     Metric:                    Value:
                   read_time                      9.48
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

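The job itself finished, but the results above stayed local: InfluxDBPublisher skips publishing unless it is given a database and a measurement. Presumably these would travel in the same -DbeamTestPipelineOptions list as the BigQuery settings; the option names below are an assumption based on the warning text, not verified against the job configuration:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"
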
Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 37.427 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/7o72a24nidyy2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2524

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2524/display/redirect>

Changes:


------------------------------------------
[...truncated 337.85 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
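
The warning is cosmetic for now, but the rename is mechanical: in the -DbeamTestPipelineOptions list above, the deprecated flag would presumably be swapped like this (value left empty exactly as in the current configuration):

    "--sdkContainerImage="    in place of    "--workerHarnessContainerImage="
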
    Oct 09, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 09, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6429331046276212595.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-GJg_cz8ZP5cdiPRAulNlDxPvOwcgfzKERqgxI5q_XX4.jar
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104888 bytes, hash e452720a9550ad2bd29528c0b3f26c9c04a35c0ebea53c2238e6dcd7940cb26f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5FJyCpVQrSvSlSjAs_JsnASjXA6-pTwiOObc15QMsm8.pb
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-09_05_45_05-17578209071882798612?project=apache-beam-testing
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-09_05_45_05-17578209071882798612
    Oct 09, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-09_05_45_05-17578209071882798612
    Oct 09, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T12:45:08.767Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:14.684Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.367Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.414Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.454Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.696Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.758Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:15.821Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:16.209Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:16.279Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:24.743Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:45:55.109Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:20.437Z: Workers have started successfully.
    Oct 09, 2021 12:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:20.482Z: Workers have started successfully.
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.055Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.204Z: Cleaning up.
    Oct 09, 2021 12:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:46:50.287Z: Stopping worker pool...
    Oct 09, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:49:16.051Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T12:49:16.093Z: Worker pool stopped.
    Oct 09, 2021 12:49:22 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-09_05_45_05-17578209071882798612 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4ec63783-fd4a-4c71-821e-ded71f0031c4 and timestamp: 2021-10-09T12:49:22.247000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.405

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:49:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 34.782 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/6me5cuxycx4v2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2523

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2523/display/redirect>

Changes:


------------------------------------------
[...truncated 337.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 09, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 09, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7434095934576527111.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6R1Ctu7wsxDoH_2sFKHxePjiL4CP_7Dd6l4Qkl6r7Tk.jar
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f04c910f7290ecba36f460c1f4c0d907204147f4c663c395dcb183a9d4b99be1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8EyRD3KQ7Lo29GDB9MDZByBBR_TGY8OV3LGDqdS5m-E.pb
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_23_45_03-16159411720345908142?project=apache-beam-testing
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_23_45_03-16159411720345908142
    Oct 09, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_23_45_03-16159411720345908142
    Oct 09, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T06:45:08.660Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:16.191Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.123Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.194Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.251Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.337Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.372Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.406Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.751Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:17.820Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:39.118Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:45:49.011Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:22.497Z: Workers have started successfully.
    Oct 09, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:22.527Z: Workers have started successfully.
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.233Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.372Z: Cleaning up.
    Oct 09, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:46:52.432Z: Stopping worker pool...
    Oct 09, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:49:15.630Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 6:49:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T06:49:15.679Z: Worker pool stopped.
    Oct 09, 2021 6:49:22 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_23_45_03-16159411720345908142 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f3b58762-f298-45f0-b85c-7b874a147d1d and timestamp: 2021-10-09T06:49:23.030000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.023

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 6:49:23 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.25 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 6s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/p42jgv4ly4nse

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2522

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2522/display/redirect?page=changes>

Changes:

[Brian Hulette] Add DataFrame changes to CHANGES.md

[stranniknm] [BEAM-12964]: add code editor component

[stranniknm] [BEAM-12966]: load initial example on page load

[stranniknm] [BEAM-12964]: fix editor scrolling

[stranniknm] [BEAM-12964]: add dark theme support

[stranniknm] [BEAM-12964]: add licence to example repository file

[stranniknm] [BEAM-12964]: extract playground page providers

[stranniknm] [BEAM-12964]: refactor styles

[Brian Hulette] Don't use run_pytest.sh for pyarrow tests

[Brian Hulette] Fail in run_pytest.sh if -m is specified

[noreply] [BEAM-10913] - Forcing update of YAML file by running kubectl apply

[noreply] [BEAM-10955] Flink Java Runner test flake: Could not find Flink job

[noreply] [BEAM-10114] Bump Pub/Sub Lite version (#15640)


------------------------------------------
[...truncated 368.68 KB...]
Gradle Test Executor 7 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0e737832183179bf29620162a767a363
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 7'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 7'
Successfully started process 'Gradle Test Executor 7'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 09, 2021 12:48:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
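
This deprecation warning is harmless for the run itself; the fix is a one-for-one rename in the -DbeamTestPipelineOptions list shown above, keeping the same (here empty) value:

    before: "--workerHarnessContainerImage="
    after:  "--sdkContainerImage="
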
    Oct 09, 2021 12:48:09 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 09, 2021 12:48:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 09, 2021 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
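
This IllegalStateException, repeated below for readUsingDefaultMethod, is the real cause of the "2 failed" summary: the RowMonitor ParDo emits Beam Row values, and no coder for Row can be inferred, so the output PCollection needs its schema attached explicitly. A minimal sketch of the fix the message itself names; the field names and nullability are illustrative (taken from the projected columns in the plans above), not the test's actual 14-field HACKER_NEWS schema:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /** Attach an explicit schema so a Row coder never has to be inferred. */
    static PCollection<Row> withExplicitSchema(PCollection<Row> rows) {
      // Illustrative schema covering the four projected columns; the real fix
      // would use the full table schema the Rows were built with.
      Schema schema =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();
      // Equivalent alternative: rows.setCoder(RowCoder.of(schema)).
      return rows.setRowSchema(schema);
    }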

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 09, 2021 12:48:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
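
In the push-down plan above, projection and filtering move out of the pipeline and into the read itself: only the four used fields are read, and the filter is evaluated server-side by the BigQuery Storage Read API (this is also the one test of the three that runs to completion below). Written directly against BigQueryIO, the pushed-down read is roughly equivalent to this sketch (the table spec is a placeholder, not taken from the test):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("some-project:beam.HACKER_NEWS")  // placeholder table spec
            .withMethod(TypedRead.Method.DIRECT_READ)
            // Column projection: only the used fields leave BigQuery.
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // Filter push-down: evaluated by the Storage Read API, not in a ParDo.
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
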
    Oct 09, 2021 12:48:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3372550691481844343.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qO8nTAgZslIfmDxxT4hKCkbGiWJ6t-61EjzOHJwnthI.jar
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 09, 2021 12:48:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 09, 2021 12:48:23 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 5fae060ff678a2dfac04ce6f5b4a175a7dc95e72d4dd6006f7d95f963510b6e8> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-X64GD_Z4ot-sBM5vW0oXWn3JXnLU3WAG99lfljUQtug.pb
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 09, 2021 12:48:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_17_48_25-12615639983825538711?project=apache-beam-testing
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_17_48_25-12615639983825538711
    Oct 09, 2021 12:48:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_17_48_25-12615639983825538711
    Oct 09, 2021 12:48:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-09T00:48:30.707Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:38.374Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.232Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.274Z: Expanding GroupByKey operations into optimizable parts.
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.300Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.364Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.390Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 09, 2021 12:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.429Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 09, 2021 12:48:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.776Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:48:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:39.856Z: Starting 5 workers in us-central1-c...
    Oct 09, 2021 12:48:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:48:57.796Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 09, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:15.072Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:15.093Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 09, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:25.327Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 09, 2021 12:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:49.706Z: Workers have started successfully.
    Oct 09, 2021 12:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:49:49.743Z: Workers have started successfully.
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.312Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.468Z: Cleaning up.
    Oct 09, 2021 12:50:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:50:18.539Z: Stopping worker pool...
    Oct 09, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:52:36.896Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 09, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-09T00:52:36.940Z: Worker pool stopped.
    Oct 09, 2021 12:52:43 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_17_48_25-12615639983825538711 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f378ce38-0a90-43cf-a806-3f77f31bfec9 and timestamp: 2021-10-09T00:52:43.144000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.449

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 09, 2021 12:52:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 7 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 38.588 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 8m 24s
152 actionable tasks: 116 executed, 36 from cache

Publishing build scan...
https://gradle.com/s/t6cg2rbizsfpo

Stopped 6 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2521

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2521/display/redirect?page=changes>

Changes:

[noreply] fixing test link

[noreply] fix linter issues


------------------------------------------
[...truncated 338.58 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is d68735da05c09ea3738895ffcf944360
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5951849471958619263.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LCJQNDdw31EDt8ENNwxdQqjaSYT-FEha-CXsV04IW-8.jar
    Oct 08, 2021 6:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b26819f12f6c9f293a9986ef45abefc9296db922e2dff4cb55a9bed344fee486> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-smgZ8S9snyk6mYbvRavvySltuSLi3_TLVam-00T-5IY.pb
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_11_45_05-6983261578214034220?project=apache-beam-testing
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_11_45_05-6983261578214034220
    Oct 08, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_11_45_05-6983261578214034220
    Oct 08, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T18:45:08.856Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:16.868Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.755Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.809Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.847Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.908Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.939Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:17.979Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:18.381Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:18.524Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 6:45:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:46.884Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:47.445Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:47.467Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:45:57.714Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:20.651Z: Workers have started successfully.
    Oct 08, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:20.691Z: Workers have started successfully.
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.095Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.320Z: Cleaning up.
    Oct 08, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:46:49.432Z: Stopping worker pool...
    Oct 08, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:49:12.188Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T18:49:12.229Z: Worker pool stopped.
    Oct 08, 2021 6:49:18 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_11_45_05-6983261578214034220 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6d170fa9-1d0f-473c-808b-035da0668d78 and timestamp: 2021-10-08T18:49:18.446000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.924

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:49:18 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 31.732 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 1s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ps52evv7hlsfo

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2520

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2520/display/redirect>

Changes:


------------------------------------------
[...truncated 337.76 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
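
The IllegalStateException above is exactly the failure its own message describes: the RowMonitor ParDo emits Beam Row values, a coder for Row cannot be inferred from the CoderRegistry, and no schema was set on the output PCollection. A minimal sketch of the suggested fix, using a hypothetical pass-through MonitorFn in place of the IT's actual RowMonitor (which this log does not show):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      // Hypothetical stand-in for ParDo(RowMonitor): passes each Row through.
      static class MonitorFn extends DoFn<Row, Row> {
        @ProcessElement
        public void process(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }

      // Schema for the four projected columns in the query above.
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      static PCollection<Row> monitor(PCollection<Row> rows) {
        return rows
            .apply("RowMonitor", ParDo.of(new MonitorFn()))
            // Without setRowSchema (or setCoder(RowCoder.of(SCHEMA))), pipeline
            // construction fails with the IllegalStateException shown above.
            .setRowSchema(SCHEMA);
      }
    }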

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
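
Note the contrast with the two failing tests above: instead of a BeamCalcRel sitting on a BeamIOSourceRel, the planner produced a single BeamPushDownIOSourceRel, so the projection (usedFields) and the supported filter are executed by the BigQuery Storage Read API rather than inside the pipeline. The push-down is enabled by registering the table with method DIRECT_READ, as the "BigQuery method is set to: DIRECT_READ" lines indicate. A minimal sketch of such a registration, assuming SqlTransform's DDL support is available in this Beam version; the column list is abbreviated (the BeamCalcRel shows the real table has 14 fields) and the table location is illustrative:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      static PCollection<Row> readWithPushDown(Pipeline pipeline) {
        // Illustrative DDL: method=DIRECT_READ selects the BigQuery Storage
        // Read API, the prerequisite for the project/filter push-down seen
        // in the BEAMPlan above.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, `type` VARCHAR, "
                + "`title` VARCHAR, `score` BIGINT) "
                + "TYPE bigquery "
                + "LOCATION 'some-project:beam.HACKER_NEWS' " // hypothetical
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
        return pipeline.apply(
            SqlTransform.query(
                    "SELECT `by` AS `author`, `type`, `title`, `score` "
                        + "FROM HACKER_NEWS "
                        + "WHERE (`type` = 'story' OR `type` = 'job') "
                        + "AND `score` > 2")
                .withDdlString(ddl));
      }
    }

With the DEFAULT (export-based) read method used by readUsingDefaultMethod above, no such push-down is possible, which is why that plan keeps the BeamCalcRel.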
    Oct 08, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4968577707565766474.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1_Mp92kvAf_eZDCa7pLVZa6jjGClDzDfo2JxSQwLOnU.jar
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 69201f95e6afaacccc0b6e43ebbc28ce994557a7d3ad2d47bb65a3cf205c333f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aSAfleavqszMC25D67wozplFV6fTrS1Hu2WjzyBcMz8.pb
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-08_05_45_06-14664077179057365781?project=apache-beam-testing
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-08_05_45_06-14664077179057365781
    Oct 08, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-08_05_45_06-14664077179057365781
    Oct 08, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T12:45:09.777Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:16.629Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.357Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.395Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.423Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.496Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.527Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.562Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:17.937Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:18.023Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:22.802Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:51.865Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:45:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:45:51.891Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:02.157Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:22.767Z: Workers have started successfully.
    Oct 08, 2021 12:46:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:22.784Z: Workers have started successfully.
    Oct 08, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.002Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.154Z: Cleaning up.
    Oct 08, 2021 12:46:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:46:53.281Z: Stopping worker pool...
    Oct 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:49:21.185Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T12:49:21.226Z: Worker pool stopped.
    Oct 08, 2021 12:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-08_05_45_06-14664077179057365781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
    Load test results for test (ID): 26cd6553-0743-4d4a-be5f-fec482014207 and timestamp: 2021-10-08T12:49:26.567000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    11.505

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
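
This warning means only that InfluxDB publication was skipped because no Influx measurement/database was configured for this job; the run's metrics were still printed above, and the test command's --metricsBigQueryDataset=beam_performance / --metricsBigQueryTable=sql_bqio_read_java_batch options configure the BigQuery-side publication separately.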

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.981 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/k367ievlu7mtm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2519

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2519/display/redirect?page=changes>

Changes:

[Udi Meiri] Release script fixes


------------------------------------------
[...truncated 338.95 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 6:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8849610813195269412.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lppfSxFQkpTrMDhcxPnsubAVMFl9L_sgz_BkQEn3s8E.jar
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 6:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 87f9be0378d066805f93f79b59088bffe9ae05a1edfda9b676dd70cbdabc3225> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-h_m-A3jQZoBfk_ebWQiL_-muBaHt_am2dt1wy9q8MiU.pb
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_23_45_05-7467560351167534370?project=apache-beam-testing
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_23_45_05-7467560351167534370
    Oct 08, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_23_45_05-7467560351167534370
    Oct 08, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T06:45:09.070Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:15.595Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.271Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.311Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.340Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.390Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.414Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.445Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.801Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:16.897Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 6:45:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:23.817Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:52.476Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:45:52.526Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 08, 2021 6:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:02.823Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:27.877Z: Workers have started successfully.
    Oct 08, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:27.907Z: Workers have started successfully.
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:56.830Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:56.973Z: Cleaning up.
    Oct 08, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:46:57.064Z: Stopping worker pool...
    Oct 08, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:49:13.333Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T06:49:13.388Z: Worker pool stopped.
    Oct 08, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_23_45_05-7467560351167534370 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c220cbbe-b40e-4fc0-944b-1e0464547291 and timestamp: 2021-10-08T06:49:18.898000000Z:
                     Metric:                    Value:
                   read_time                     8.396
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 6:49:19 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 30.615 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 0s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yulpcrfjc3op4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2518

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2518/display/redirect?page=changes>

Changes:

[noreply] Revert "[BEAM-12993] Update to Debezium 1.7.0.Final (#15636)"

[noreply] [BEAM-12769] Few fixes related to Java Class Lookup based cross-language


------------------------------------------
[...truncated 340.22 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 08, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 08, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 08, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 08, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 08, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 08, 2021 12:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 08, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2555889397093215436.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-zIVKvBbXeSgmPaO1H_VRjHDJIHN4eyEfstIOd_AXtRQ.jar
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 08, 2021 12:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2f37509368649600704f3cda080a3411812fee0625419264c5019a15c17f78d6> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LzdQk2hklgBwTzzaCAo0EYEv7gYlQZJkxQGaFcF_eNY.pb
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 08, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_17_45_32-15240630194847343964?project=apache-beam-testing
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_17_45_32-15240630194847343964
    Oct 08, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_17_45_32-15240630194847343964
    Oct 08, 2021 12:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-08T00:45:36.025Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 08, 2021 12:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.194Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.871Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.940Z: Expanding GroupByKey operations into optimizable parts.
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:43.971Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.054Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.094Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.120Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.534Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:45:44.671Z: Starting 5 workers in us-central1-c...
    Oct 08, 2021 12:46:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:14.855Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 08, 2021 12:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:28.464Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 08, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:53.400Z: Workers have started successfully.
    Oct 08, 2021 12:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:46:53.432Z: Workers have started successfully.
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:21.774Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:21.987Z: Cleaning up.
    Oct 08, 2021 12:47:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:47:22.137Z: Stopping worker pool...
    Oct 08, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:49:44.857Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 08, 2021 12:49:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-08T00:49:44.906Z: Worker pool stopped.
    Oct 08, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_17_45_32-15240630194847343964 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 94b03e79-18c4-4bc7-b87d-e20ef1229ac1 and timestamp: 2021-10-08T00:49:52.779000000Z:
                     Metric:                    Value:
                   read_time                     9.281
                 fields_read                 4375276.0
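
The two metrics above are the whole story for the passing push-down variant: the source reads only the four used fields and BigQuery evaluates the filter server-side. A sketch of roughly what that read amounts to at the IO layer, using the public BigQueryIO API (the table reference is illustrative, not the test's actual table):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Mirror the plan's usedFields/BigQueryFilter: read only the
        // projected columns and let the Storage Read API apply the filter.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }

Selecting fields and restricting rows at the source is what the push-down is buying here: fields_read stays bounded instead of scanning every column of the table.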

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 08, 2021 12:49:53 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
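
The warning above is why no metrics reach the dashboard for this run: the publisher declines to export unless both a database and a measurement are configured. A sketch of providing them, assuming Beam's InfluxDBSettings test utility (host and names are illustrative):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // Both database and measurement must be present, otherwise
        // publishWithCheck logs the warning above and skips publishing.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // illustrative
            .withDatabase("beam_test_metrics")           // illustrative
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
        System.out.println(settings);
      }
    }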

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 38.699 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/pnfornx7rifqe

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2517

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2517/display/redirect?page=changes>

Changes:

[noreply] [BEAM-13015] Remove the overhead of SpecMonitoringInfoValidator

[noreply] Minor: Replace generic external.py links in multi-language documentation

[kawaigin] Updated screendiff integration test golden screenshots.


------------------------------------------
[...truncated 342.00 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 031b61a29a86ea8d1e4fd08e92109785
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'
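
The -DbeamTestPipelineOptions JSON array in the command above is how these integration tests receive their runner configuration. A minimal sketch of how a test picks it up, assuming the standard TestPipeline entry point:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    public class TestOptionsSketch {
      public static void main(String[] args) {
        // TestPipeline parses the JSON array in the beamTestPipelineOptions
        // system property (set via -D above) into PipelineOptions.
        PipelineOptions options = TestPipeline.testingPipelineOptions();
        System.out.println(options.getTempLocation());
      }
    }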

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 6:49:31 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 6:49:31 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 6:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:34 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:35 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
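
Both printouts above come from Beam SQL's Calcite planner: the SQL is parsed into the LogicalProject/LogicalFilter tree, then lowered to a Beam plan; in this non-push-down variant the filter stays in a BeamCalcRel instead of moving into the source. A self-contained sketch of running the same shape of query through the public SqlTransform API (the schema and data are stand-ins, not the real HACKER_NEWS table):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlPlanSketch {
      public static void main(String[] args) {
        // Stand-in schema covering only the queried columns.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create();
        PCollection<Row> hackerNews = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "Hello Beam", 3L)
                    .build())
                .withRowSchema(schema));

        // The planner turns this query into the LogicalProject/LogicalFilter
        // tree shown above before lowering it to a Beam plan.
        PCollection<Row> filtered = PCollectionTuple
            .of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }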


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
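
The exception above names its own remedy: a PCollection of Beam Rows has no inferrable Coder, so the producer must attach a Schema, after which the SDK derives a RowCoder. A minimal sketch of that pattern (the schema is illustrative; in the test the fix would sit where BeamSqlRelUtils.toPCollection materializes its output):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        // Illustrative schema; not the actual HACKER_NEWS schema.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        Pipeline p = Pipeline.create();
        // Attaching the schema lets the SDK derive a RowCoder, avoiding
        // the "Cannot provide a coder for a Beam Row" failure above.
        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema)
                    .addValues("alice", "story", "Hello Beam", 3L)
                    .build())
                .withRowSchema(schema));

        // On an already-built PCollection<Row>, the error message's own
        // suggestion is the equivalent fix:
        //   rows.setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }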

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:36 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 6:49:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-t6vcM6lWwnsubPT3Rg4ZeliJiC02dU6wVla1pmaVTMo.jar
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6620371285689845389.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Jl0DvZDCGeVJjiJ89Td9PnX99FmVfDHkZEWAmaMfOb4.jar
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash d626e53be9a553be32ec908c07115f0e438a48d125e3fd3cf14675da27ef69d3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-1iblO-mlU74y7JCMBxFfDkOKSNEl4_088UZ12ifvadM.pb
    Oct 07, 2021 6:49:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 6:49:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_11_49_44-8045755234887827204?project=apache-beam-testing
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_11_49_44-8045755234887827204
    Oct 07, 2021 6:49:45 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_11_49_44-8045755234887827204
    Oct 07, 2021 6:49:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T18:49:47.676Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:54.419Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.084Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.120Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.153Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.217Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.268Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.300Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 6:49:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.593Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:49:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:49:55.656Z: Starting 5 workers in us-central1-a...
    Oct 07, 2021 6:50:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:19.678Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 6:50:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:32.955Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:50:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:32.988Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 07, 2021 6:50:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:50:43.245Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:51:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:07.455Z: Workers have started successfully.
    Oct 07, 2021 6:51:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:07.485Z: Workers have started successfully.
    Oct 07, 2021 6:51:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.555Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.685Z: Cleaning up.
    Oct 07, 2021 6:51:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:51:33.757Z: Stopping worker pool...
    Oct 07, 2021 6:54:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:53:58.752Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 6:54:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T18:53:58.795Z: Worker pool stopped.
    Oct 07, 2021 6:54:03 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_11_49_44-8045755234887827204 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7be44caf-f1ee-4388-ae97-a48e2aab0939 and timestamp: 2021-10-07T18:54:03.966000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.878

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:54:04 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 37.459 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/mkufw5o2a6ps6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2516

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2516/display/redirect>

Changes:


------------------------------------------
[...truncated 337.72 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2a0449772da4ccb76cdfc954d2c4c73f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7672760343491070526.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-0AqosBYPr7A4cZyEECcieDmxYVkNmN6SWWlTwx80DW0.jar
    Oct 07, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c817d4a4b3e9441df5a809d7242b495b8772e2cd04ab217c8006c69f5414571f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-yBfUpLPpRB31qAnXJCtJW4dy4s0EqyF8gAbGn1QUVx8.pb
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-07_05_45_08-10394073095498599519?project=apache-beam-testing
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-07_05_45_08-10394073095498599519
    Oct 07, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-07_05_45_08-10394073095498599519
    Oct 07, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T12:45:13.773Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.097Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.882Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.925Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:21.955Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.025Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.055Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.089Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.442Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:22.555Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:40.896Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:58.920Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:45:58.960Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 07, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:09.223Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:33.006Z: Workers have started successfully.
    Oct 07, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:46:33.032Z: Workers have started successfully.
    Oct 07, 2021 12:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:02.855Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:03.059Z: Cleaning up.
    Oct 07, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:47:03.174Z: Stopping worker pool...
    Oct 07, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:49:26.217Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T12:49:26.281Z: Worker pool stopped.
    Oct 07, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-07_05_45_08-10394073095498599519 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6e527e5f-1aa7-4e38-b4f8-6a4e5071e8f0 and timestamp: 2021-10-07T12:49:31.491000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.822

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 40.306 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b4yfzyadkldzm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2515

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2515/display/redirect?page=changes>

Changes:

[david.prieto] [BEAM-12950] Not delete orphaned files to avoid missing events

[david.prieto] [BEAM-12950] Add Bug fix description to CHANGES.md

[david.prieto] [BEAM-12950] fix linter issues

[david.prieto] [BEAM-12950] Skip unit test


------------------------------------------
[...truncated 349.43 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2a0449772da4ccb76cdfc954d2c4c73f
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

Gradle Test Executor 4 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
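
The duplicate binding comes from slf4j-jdk14 arriving on the test classpath alongside the binding bundled in the legacy worker jar; which dependency pulls it in can be checked with Gradle's dependency report, for example:

    ./gradlew :sdks:java:extensions:sql:perf-tests:dependencies --configuration testRuntimeClasspath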

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 6:46:11 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 6:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
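
The two remedies the message names look roughly like the sketch below, assuming access to the Row PCollection produced by the ParDo(RowMonitor) step; the schema is a placeholder inferred from the query's projected columns, not the table's actual definition:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    /** Minimal sketch of the two remedies named in the error message. */
    class RowSchemaFix {
      // Placeholder schema matching the SELECTed columns; the real field types
      // live in the HACKER_NEWS table definition, so treat these as assumptions.
      static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        // Remedy 1: attach a schema so a RowCoder can be inferred downstream.
        return rows.setRowSchema(SCHEMA);
        // Remedy 2 (alternative): rows.setCoder(RowCoder.of(SCHEMA));
      }
    }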

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 6:46:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 6:46:17 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 6:46:17 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
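
In plain BigQueryIO terms, the projection and filter push-down above amounts to a direct read with selected fields and a row restriction; a minimal sketch, with the table reference as a placeholder (the IT resolves `beam.HACKER_NEWS` through its own table provider):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    /** Sketch of the shape of the pushed-down read. */
    class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:hacker_news.full")  // placeholder table reference
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Projection push-down: only the used fields are read.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Filter push-down: the supported predicate is evaluated by BigQuery.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        // p.run() omitted; this only illustrates the translated read.
      }
    }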
    Oct 07, 2021 6:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 6:46:21 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5720442946869724837.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-fOJgyedDSivd487OB6ea0m91XAucy7BcTxFfRD88Ksw.jar
    Oct 07, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 276fb88503a5f9bc2605353dd1d58a0cef2327ac6845a1f630de511809dd6b42> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-J2-4hQOl-bwmBTU90dWKDO8jJ6xoRaH2MN5RGAnda0I.pb
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_23_46_26-8236915367545468161?project=apache-beam-testing
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_23_46_26-8236915367545468161
    Oct 07, 2021 6:46:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_23_46_26-8236915367545468161
    Oct 07, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T06:46:29.807Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:37.257Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.194Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.227Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.263Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.335Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.362Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.394Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.737Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:38.794Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:46:55.147Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 6:47:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:18.969Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:44.306Z: Workers have started successfully.
    Oct 07, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:47:44.342Z: Workers have started successfully.
    Oct 07, 2021 6:48:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.342Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 6:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.478Z: Cleaning up.
    Oct 07, 2021 6:48:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:48:13.560Z: Stopping worker pool...
    Oct 07, 2021 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:50:36.653Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 6:50:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T06:50:36.706Z: Worker pool stopped.
    Oct 07, 2021 6:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_23_46_26-8236915367545468161 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6dd3dbd4-3dfe-4a18-b85b-369cd13c4e57 and timestamp: 2021-10-07T06:50:42.731000000Z:
                     Metric:                    Value:
                   read_time                     8.653
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 6:50:43 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
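
The metrics are skipped because no InfluxDB measurement/database was configured for this run; assuming the option names used elsewhere in Beam's test infrastructure, they would be supplied through the -DbeamTestPipelineOptions list, roughly:

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=beam_performance","--influxHost=<host>"

The host value here is a placeholder; without these settings, InfluxDBPublisher logs the warning above and publishes nothing.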

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 37.67 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 25s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/aijag52hzjdly

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2514

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2514/display/redirect?page=changes>

Changes:

[stefan.istrate] Fix "too many pings" errors.

[stefan.istrate] Increase keepalive timeout to 5 minutes.

[Robert Bradshaw] Dead letter option.

[Robert Bradshaw] Guard setup.py logic with __main__ condition.

[stefan.istrate] Fix yapf complaints.

[aydar.zaynutdinov] [BEAM-12969] [Playground]

[Robert Bradshaw] Add the ability to use subprocesses with the dead letter queue.

[Robert Bradshaw] Support multi-output DoFns.

[noreply] Merge pull request #15510 from [BEAM-12883] Add coder for

[Robert Bradshaw] multi-output fix

[Robert Bradshaw] Add thresholding to dead letter pattern.

[Robert Bradshaw] threshold test fixes

[Robert Bradshaw] Better naming, documentation.

[noreply] [BEAM-12482] Ensure that we ignore schema update options when loading

[Kyle Weaver] Moving to 2.35.0-SNAPSHOT on master branch.

[Kyle Weaver] Add 2.35.0 section to changelog.

[noreply] Fix email links in the contact page


------------------------------------------
[...truncated 349.20 KB...]
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 07, 2021 12:52:07 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 07, 2021 12:52:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 07, 2021 12:52:08 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 07, 2021 12:52:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:11 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:12 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 07, 2021 12:52:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 07, 2021 12:52:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.35.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-ZgoREHN3SqOQclIGldjYWlJZYlnicfJ-w-LJBzxpP7U.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.35.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3661911375608363829.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Glt5vUeHYyRGwrJxMcjzgrP8jbhpSsVp0FNZwV7fzoo.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.35.0-SNAPSHOT-tests-__o_cT2WEL8WknZtwTW8aQevksYt5UgSYWmasm174nQ.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.35.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-4bF8QpaxkWlkdPssX5OV0S9rIzFuRU0pojrN2fBwjjk.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.35.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.35.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 241 files cached, 9 files newly uploaded in 0 seconds
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 07, 2021 12:52:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2cc5f020f31bc53b80cca906bd659a4b383ab008490887abdca951b75fcb9b02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LMXwIPMbxTuAzKkGvWWaSzg6sAhJCIer3KlRt1_LmwI.pb
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 07, 2021 12:52:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.35.0-SNAPSHOT
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_17_52_21-9042450593588474239?project=apache-beam-testing
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_17_52_21-9042450593588474239
    Oct 07, 2021 12:52:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_17_52_21-9042450593588474239
    Oct 07, 2021 12:52:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-07T00:52:25.042Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:35.558Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.543Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.610Z: Expanding GroupByKey operations into optimizable parts.
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.646Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.716Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.749Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:36.804Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:37.205Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:52:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:37.284Z: Starting 5 workers in us-central1-c...
    Oct 07, 2021 12:52:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:52:50.577Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 07, 2021 12:53:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:22.376Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 07, 2021 12:53:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:48.961Z: Workers have started successfully.
    Oct 07, 2021 12:53:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:53:48.990Z: Workers have started successfully.
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.283Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.547Z: Cleaning up.
    Oct 07, 2021 12:54:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:54:18.639Z: Stopping worker pool...
    Oct 07, 2021 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:56:47.140Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 07, 2021 12:56:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-07T00:56:47.198Z: Worker pool stopped.
    Oct 07, 2021 12:56:53 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_17_52_21-9042450593588474239 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): fecbac54-db59-483e-a95e-d486ca08c0d6 and timestamp: 2021-10-07T00:56:53.817000000Z:
                     Metric:                    Value:
                   read_time                     8.459
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 07, 2021 12:56:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 50.461 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 12s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/arjb4enc6ighm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2513

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2513/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Restore "Default to Runner v2 for Python Streaming jobs. (#15140)"

[Robert Bradshaw] Avoid incompatible setting.

[noreply] [BEAM-12909][BEAM-12849]  Add support for running spark3 nexmark queries


------------------------------------------
[...truncated 344.47 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c4761723fbcbd113bf133425c2d1ed59
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 6:45:23 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:28 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 6:45:29 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
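
For comparison, the projection and filter pushed down above correspond, roughly, to what a hand-written BigQuery Storage API read would express; a sketch under that assumption (the table reference below is a placeholder, while the field list and row restriction are copied from the plan and the pushed filter):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // DIRECT_READ with selected fields and a row restriction: the BigQuery
    // Storage API then serves only these columns and the matching rows.
    TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // placeholder table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
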
    Oct 06, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 6:45:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test271684441090064629.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-S1tvsTxoMyC9bA_SXB-_80Taz36u53aDF2cA4oIbfbE.jar
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 6:45:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0c6914d994f39b550a3132e4e4ca6f872bcd01899e9b6c0a5cf8d03f1e5a1ea5> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DGkU2ZTzm1UKMTLk5MpvhyvNAYmem2wKXPjQPx5aHqU.pb
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 6:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_11_45_37-4958870658299751609?project=apache-beam-testing
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_11_45_37-4958870658299751609
    Oct 06, 2021 6:45:38 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_11_45_37-4958870658299751609
    Oct 06, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T18:45:41.035Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:47.718Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.023Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.055Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.080Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.161Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.193Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 6:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.223Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.549Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:45:49.615Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 6:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:46:15.626Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 6:46:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:46:38.054Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:04.178Z: Workers have started successfully.
    Oct 06, 2021 6:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:04.219Z: Workers have started successfully.
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:33.910Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:34.116Z: Cleaning up.
    Oct 06, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:47:34.217Z: Stopping worker pool...
    Oct 06, 2021 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:49:52.921Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 6:49:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T18:49:52.980Z: Worker pool stopped.
    Oct 06, 2021 6:49:59 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_11_45_37-4958870658299751609 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e260e0c1-d250-44b7-aa11-903d6bed5bf5 and timestamp: 2021-10-06T18:49:59.289000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.493

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:49:59 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
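
The warning above means the run's metrics (read_time, fields_read) were computed but never exported: the InfluxDB publisher checks for a measurement and database and skips publishing when either is missing. A hedged sketch of supplying them, assuming the InfluxDBSettings builder in org.apache.beam.sdk.testutils.publishing (the host value is an assumption, and the database and measurement names are borrowed from this job's BigQuery metric options, which may not be the intended Influx names):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Both properties are checked by publishWithCheck; leaving either unset
    // produces the "Missing property -- measurement/database" warning above.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // assumed host
            .withDatabase("beam_performance")            // assumed database
            .withMeasurement("sql_bqio_read_java_batch") // assumed measurement
            .get();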

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 41.25 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 38s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/c3bggcoowgbao

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2512

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2512/display/redirect>

Changes:


------------------------------------------
[...truncated 336.83 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
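
The warning above traces back to this job's own invocation: the -DbeamTestPipelineOptions JSON in the "Starting process" command line above still passes the legacy flag. The swap the runner asks for is a straight rename, shown here against the empty value this job uses:

        "--workerHarnessContainerImage=",   (deprecated legacy option)
        "--sdkContainerImage=",             (preferred replacement)
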
    Oct 06, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 06, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7996053131578164699.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cIDK8C9p7FYllt4Tz1Fm4_Qi2rZjBEi8elnaJdsXbm0.jar
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash a80ee476681df5da9f2022da40624e9f50f5de28a5d5a8d5a48a17d188fa3c4d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qA7kdmgd9dqfICLaQGJOn1D13iil1ajVpIoX0Yj6PE0.pb
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-06_05_45_03-8741306478298111401?project=apache-beam-testing
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-06_05_45_03-8741306478298111401
    Oct 06, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-06_05_45_03-8741306478298111401
    Oct 06, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T12:45:06.681Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:13.434Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.058Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.100Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.142Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.218Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.245Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.277Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.650Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:14.746Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:21.604Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 12:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:45:53.905Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:18.004Z: Workers have started successfully.
    Oct 06, 2021 12:46:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:18.039Z: Workers have started successfully.
    Oct 06, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.684Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.860Z: Cleaning up.
    Oct 06, 2021 12:46:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:46:44.987Z: Stopping worker pool...
    Oct 06, 2021 12:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:48:59.467Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 12:49:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T12:48:59.544Z: Worker pool stopped.
    Oct 06, 2021 12:49:05 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-06_05_45_03-8741306478298111401 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 35cf7617-2b4f-4671-ade3-fffbe82e37ef and timestamp: 2021-10-06T12:49:05.587000000Z:
                     Metric:                    Value:
                   read_time                     7.461
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:49:06 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 18.959 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 48s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/w3sbgdw5q6ji6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2511

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2511/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15602 from [BEAM-10917] Add support for BigQuery


------------------------------------------
[...truncated 339.81 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 6:45:14 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@415755770]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@193331377]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 06, 2021 6:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7509582736706140335.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1M2W-FTlaVjjYFU-04baqCJsRh706SQMfQITWxLcTGk.jar
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 133be6250525ccddfac64adec29fbb5ff4c5f4c589c6f3c039486c04f0668db3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EzvmJQUlzN36xkrewp-7X_TF9MWJxvPAOUhsBPBmjbM.pb
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 6:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_23_45_33-10813076697480166768?project=apache-beam-testing
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_23_45_33-10813076697480166768
    Oct 06, 2021 6:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_23_45_33-10813076697480166768
    Oct 06, 2021 6:45:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T06:45:36.315Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:42.248Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:42.985Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.045Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.073Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.147Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.182Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.203Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.565Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:43.643Z: Starting 5 workers in us-central1-c...
    Oct 06, 2021 6:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:45:51.896Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:27.982Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:53.581Z: Workers have started successfully.
    Oct 06, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:46:53.634Z: Workers have started successfully.
    Oct 06, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.072Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.332Z: Cleaning up.
    Oct 06, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:47:25.437Z: Stopping worker pool...
    Oct 06, 2021 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:49:43.950Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 6:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T06:49:44.022Z: Worker pool stopped.
    Oct 06, 2021 6:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_23_45_33-10813076697480166768 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7d50295c-1eaf-4146-8046-05026846ba4a and timestamp: 2021-10-06T06:49:51.030000000Z:
                     Metric:                    Value:
                   read_time                      8.51
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 6:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
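
This warning means the InfluxDB publisher was never handed a target, so the read_time and fields_read values above are printed but not persisted anywhere. A minimal sketch of configuring the publisher, assuming Beam's InfluxDBSettings builder from the test utilities; the host, database, and measurement values below are placeholders, not this job's real configuration:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Sketch only: supply the measurement/database the warning reports missing.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // placeholder host
            .withDatabase("beam_test_metrics")           // placeholder database name
            .withMeasurement("sql_bqio_read_java_batch") // placeholder measurement
            .get();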

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 43.117 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/4hswe7wmgbayy

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2510

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2510/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11831] Partially Revert "[BEAM-11805] Replace user-agent for

[kawaigin] [BEAM-10708] Enable submit beam_sql built jobs to Dataflow


------------------------------------------
[...truncated 342.77 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 6793a9128f34eef6e236f990f5b36a91
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 06, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 06, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 06, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 06, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
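
This IllegalStateException is a coder-inference failure, and the message itself names the fix: a ParDo that emits Beam Rows gives the SDK no way to derive a RowCoder unless a schema is attached to the output PCollection. A minimal sketch of the suggested setRowSchema call, assuming a hypothetical input PCollection (inputRows) and DoFn (monitorFn) standing in for the test's internals; the field types are guesses based on the HACKER_NEWS query above:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema for the projected columns; INT64 for score is an assumption
    // (BigQuery INTEGER maps to INT64).
    Schema rowSchema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    // Attaching the schema lets the SDK infer a RowCoder for the output,
    // which is exactly what the error message asks for.
    PCollection<Row> monitored =
        inputRows
            .apply("ParDo(RowMonitor)", ParDo.of(monitorFn)) // hypothetical DoFn
            .setRowSchema(rowSchema);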

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 06, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
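
Note how the BEAMPlan above collapses to a single BeamPushDownIOSourceRel: both the projection (usedFields) and the filter are handed to the BigQuery Storage Read API rather than evaluated in the pipeline. Roughly the same push-down expressed directly against BigQueryIO, as a sketch only; the table reference is a placeholder and the wiring is not taken from the test:

    import java.util.Arrays;

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.values.PCollection;

    Pipeline p = Pipeline.create();
    PCollection<TableRow> rows =
        p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:beam.HACKER_NEWS") // placeholder table reference
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                // Projection push-down: read only the used fields.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Predicate push-down: the filter from the log, evaluated server-side.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
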
    Oct 06, 2021 12:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 06, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test265651886755238909.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xzJi6obHPmtYTjhps4MRyikZ1fKeBXN5TBEfa7UQnNI.jar
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 06, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 76a702490cc4c2876299bf32ebc6cf89638a6c8e5c338dc20679ca9b86edb72a> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-dqcCSQzEwodimb8y68bPiWOKbI5cM43CBnnKm4bttyo.pb
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 06, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_17_45_32-14845024658545748933?project=apache-beam-testing
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_17_45_32-14845024658545748933
    Oct 06, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_17_45_32-14845024658545748933
    Oct 06, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-06T00:45:35.635Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:41.176Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:41.977Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.027Z: Expanding GroupByKey operations into optimizable parts.
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.104Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.181Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.224Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.259Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.689Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:42.788Z: Starting 5 workers in us-central1-a...
    Oct 06, 2021 12:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:45:50.009Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 06, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:18.798Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:18.840Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 06, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:29.199Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 06, 2021 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:51.921Z: Workers have started successfully.
    Oct 06, 2021 12:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:46:51.961Z: Workers have started successfully.
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.453Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.600Z: Cleaning up.
    Oct 06, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:47:18.679Z: Stopping worker pool...
    Oct 06, 2021 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:49:39.010Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 06, 2021 12:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-06T00:49:39.054Z: Worker pool stopped.
    Oct 06, 2021 12:49:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_17_45_32-14845024658545748933 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bb49b25e-82a3-4aff-b301-a0c9f02e9eac and timestamp: 2021-10-06T00:49:45.460000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.883

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 06, 2021 12:49:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 31.775 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/hzyumsnlojkwu

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2509

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2509/display/redirect?page=changes>

Changes:

[noreply] Only stage the single (fat) jar when auto-starting expansion service.

[noreply] Disable samza counters (#15659)

[noreply] Merge pull request #15614 from [BEAM-12953] [Playground] Create protobuf


------------------------------------------
[...truncated 338.35 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4da0af97686d2574b64cfb59e094a515
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 6:44:53 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 05, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5480627460130910001.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gexQFT7luw6fjnkX6r1qRcNB99zZRM4fAmsKNQZX4zM.jar
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7fd0ee2c45d319bb9d2963ba7d8ed7fb8d5e7956f6036b63e9214efe0dd9a35f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-f9DuLEXTGbudKWO6fY7X-41eeVb2A2tj6SFO_g3Zo18.pb
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_11_45_07-6619070403620490252?project=apache-beam-testing
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_11_45_07-6619070403620490252
    Oct 05, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_11_45_07-6619070403620490252
    Oct 05, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T18:45:11.030Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:19.658Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.471Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.513Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.538Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.601Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.632Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:20.660Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:21.077Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:21.160Z: Starting 5 workers in us-central1-c...
    Oct 05, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:24.042Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:52.736Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:45:52.816Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 6:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:03.089Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:27.595Z: Workers have started successfully.
    Oct 05, 2021 6:46:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:27.628Z: Workers have started successfully.
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.431Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.609Z: Cleaning up.
    Oct 05, 2021 6:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:46:59.696Z: Stopping worker pool...
    Oct 05, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:49:20.195Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 6:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T18:49:20.306Z: Worker pool stopped.
    Oct 05, 2021 6:49:26 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_11_45_07-6619070403620490252 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9d7f9fdd-656d-4391-84f0-482b42536868 and timestamp: 2021-10-05T18:49:26.334000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.483

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:49:26 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 37.66 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/liugw37s2jtry

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2508

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2508/display/redirect>

Changes:


------------------------------------------
[...truncated 340.63 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0363f80a40f7f671425e1f90539807f1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 12:45:16 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:19 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:21 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
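
This failure matches the second root cause in the message: the RowMonitor ParDo emits Beam Row elements, and Row has no inferable coder unless a schema is attached to the output PCollection. Below is a minimal sketch of the remedy the error text itself suggests; the pass-through DoFn is a hypothetical stand-in for the test's RowMonitor, not the actual fix in BigQueryIOPushDownIT.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema =
            Schema.builder().addStringField("title").addInt32Field("score").build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("Hello", 3).build())
                .withRowSchema(schema));

        // A pass-through DoFn like the test's RowMonitor: its output is a
        // PCollection<Row> with no coder of its own, so a schema must be
        // attached explicitly or pipeline construction fails as logged above.
        PCollection<Row> monitored = rows
            .apply(ParDo.of(
                new DoFn<Row, Row>() {
                  @ProcessElement
                  public void process(@Element Row row, OutputReceiver<Row> out) {
                    out.output(row);
                  }
                }))
            .setRowSchema(schema); // the remedy named in the error message

        p.run().waitUntilFinish();
      }
    }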

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 12:45:22 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
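
At the plan level above, Beam SQL has pushed both the column projection (usedFields) and the filter into the BigQuery source, which is why this third test proceeds while the other two fail during coder inference. Expressed directly against BigQueryIO, the equivalent Storage Read API configuration would look roughly like the sketch below; the table reference is a placeholder, not the table this job actually reads.

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("some-project:some_dataset.hacker_news") // placeholder table
                .withMethod(TypedRead.Method.DIRECT_READ)
                // Column projection, as in usedFields=[[by, type, title, score]]:
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Row filter, as in the "Pushing down the following filter" line:
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }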
    Oct 05, 2021 12:45:23 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5893968782445842855.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-m8M_J_ys9Ux59DAoW_KcMKuRMM3_iQ3XvUH9zOOWMDc.jar
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 8c05ef4335ada3ce545c421f978e10b862601d93f2bbccd699d6f1c59e0e06af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jAXvQzWto85UXEIfl44QuGJgHZPyu8zWmdbxxZ4OBq8.pb
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 12:45:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-05_05_45_29-8122112085929557922?project=apache-beam-testing
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-05_05_45_29-8122112085929557922
    Oct 05, 2021 12:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-05_05_45_29-8122112085929557922
    Oct 05, 2021 12:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T12:45:32.765Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:41.360Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.213Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.260Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.291Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.368Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.399Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.431Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.856Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:45:42.935Z: Starting 5 workers in us-central1-c...
    Oct 05, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:06.203Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:23.506Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:49.873Z: Workers have started successfully.
    Oct 05, 2021 12:46:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:46:49.906Z: Workers have started successfully.
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:18.939Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:19.123Z: Cleaning up.
    Oct 05, 2021 12:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:47:19.220Z: Stopping worker pool...
    Oct 05, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:49:42.684Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 12:49:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T12:49:42.811Z: Worker pool stopped.
    Oct 05, 2021 12:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-05_05_45_29-8122112085929557922 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7b4ddd42-b28f-4078-a4f0-b3d53bf5eeb1 and timestamp: 2021-10-05T12:49:48.267000000Z:
                     Metric:                    Value:
                   read_time                     7.253
                 fields_read                 4375276.0
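
(For scale: 4,375,276 fields over 7.253 s of measured read time works out to roughly 603,000 fields per second across the job's 5 workers.)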

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
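
(This warning concerns only the optional InfluxDB export: no InfluxDB measurement/database was configured for this job, so InfluxDBPublisher skips publishing. The BigQuery metrics sink named by the --metricsBigQueryDataset/--metricsBigQueryTable options in the test command line is configured separately.)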

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 36.56 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/fzj654rglq6go

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2507

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2507/display/redirect?page=changes>

Changes:

[danthev] Switch hintNumWorkers to ValueProvider, switch firstInstant to side


------------------------------------------
[...truncated 340.85 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 0363f80a40f7f671425e1f90539807f1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 6:45:17 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 6:45:22 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 6:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 05, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3677606633200463572.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-J0qIFCndQlEv1OSRlUg532mtw9zW0K-tdB3VO1f8VDM.jar
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash f1893edfcc0c9153f4c1ed74317021930e8f7bebc716d5de610c0b2bad231f6d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8Yk-38wMkVP0we10MXAhkw6Pe-vHFtXeYQwLK60jH20.pb
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 6:45:30 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_23_45_30-8067968371807703600?project=apache-beam-testing
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_23_45_30-8067968371807703600
    Oct 05, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_23_45_30-8067968371807703600
    Oct 05, 2021 6:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T06:45:33.916Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:39.500Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.214Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.243Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.273Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.324Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.350Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 6:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.372Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.712Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:45:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:45:40.777Z: Starting 5 workers in us-central1-a...
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:11.862Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:12.737Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:12.764Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 6:46:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:23.161Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:46.637Z: Workers have started successfully.
    Oct 05, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:46:46.668Z: Workers have started successfully.
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.236Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.383Z: Cleaning up.
    Oct 05, 2021 6:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:47:15.455Z: Stopping worker pool...
    Oct 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:49:49.190Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 6:49:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T06:49:49.236Z: Worker pool stopped.
    Oct 05, 2021 6:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_23_45_30-8067968371807703600 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 85992a32-d12e-4eea-9898-f4c3c314fa83 and timestamp: 2021-10-05T06:49:54.576000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.463
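
(Same fields_read as the run in the preceding report, with read_time at 8.463 s instead of 7.253 s, i.e. roughly 517,000 fields per second versus roughly 603,000; most likely ordinary run-to-run variance.)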

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 6:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 42.1 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 32s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/254skpo3o3wta

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2506

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2506/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15618 from [BEAM-12968] [Playground] Create README

[noreply] Merge pull request #15626 from [BEAM-12963] [Playground] Create base

[noreply] Merge pull request #15566 from [BEAM-12925] Correct behavior retrieving

[noreply] [BEAM-12911] Update schema translation (Java, Python) to log more

[noreply] Update CHANGES.md for JdbcIO breaking change (#15651)

[noreply] [BEAM-12513] Add Go SDK metrics content to BPG. (#15650)

[noreply] [BEAM-12996] Improve Error Logging in ConfigBuilder (#15646)

[noreply] [BEAM-13000] Disable Reshuffle Translation in Samza Portable Mode


------------------------------------------
[...truncated 344.11 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 05, 2021 12:55:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 05, 2021 12:55:37 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 05, 2021 12:55:38 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 05, 2021 12:55:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:46 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:47 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1841175909]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 05, 2021 12:55:48 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 05, 2021 12:55:48 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 05, 2021 12:55:55 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 05, 2021 12:55:55 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 05, 2021 12:55:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-LQIYNIp1aM1u2157NJfFX8iQNiKTt9bu1VkD5-famiI.jar
    Oct 05, 2021 12:55:56 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2905102385829149055.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qry4rld43sBtAEmV2g9xQeuPtvgzWKjkZgDWwNDH-74.jar
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 05, 2021 12:55:58 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ec34d17a8ce4cad13c8ee40471eaa78b94eed0c533288b90b67ce9e72adf42bc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7DTReozkytE8juQEceqni5Tu0MUzKIuQtnzp5yrfQrw.pb
    Oct 05, 2021 12:56:01 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 05, 2021 12:56:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_17_56_02-1799180745036754217?project=apache-beam-testing
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_17_56_02-1799180745036754217
    Oct 05, 2021 12:56:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_17_56_02-1799180745036754217
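    (Equivalent, with flags in the more common order: gcloud dataflow jobs cancel 2021-10-04_17_56_02-1799180745036754217 --region=us-central1 --project=apache-beam-testing)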
    Oct 05, 2021 12:56:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-05T00:56:06.569Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:14.817Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.692Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.721Z: Expanding GroupByKey operations into optimizable parts.
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.738Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.790Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.818Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:15.857Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 05, 2021 12:56:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:16.211Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:56:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:16.308Z: Starting 5 workers in us-central1-a...
    Oct 05, 2021 12:56:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:48.383Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 05, 2021 12:56:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:54.666Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:56:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:56:54.700Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 05, 2021 12:57:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:05.000Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 05, 2021 12:57:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:28.636Z: Workers have started successfully.
    Oct 05, 2021 12:57:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:28.664Z: Workers have started successfully.
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.211Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.350Z: Cleaning up.
    Oct 05, 2021 12:57:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T00:57:58.430Z: Stopping worker pool...
    Oct 05, 2021 1:00:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T01:00:22.209Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 05, 2021 1:00:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-05T01:00:22.254Z: Worker pool stopped.
    Oct 05, 2021 1:00:29 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_17_56_02-1799180745036754217 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f0f84e75-81fd-42c7-81a1-abfc1b6ee917 and timestamp: 2021-10-05T01:00:29.759000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.162
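    (Back-of-envelope throughput: 4,375,276 fields / 8.162 s ≈ 5.4 × 10^5 fields read per second for this run.)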


Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 05, 2021 1:00:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
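    (This warning means the run's metrics were collected but not exported: InfluxDBPublisher skips publishing when no measurement/database is configured. A minimal sketch of supplying them through Beam's testutils follows; the host and database values are placeholders, not this job's real configuration.)

        import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

        // Placeholder values -- normally handed to the job as pipeline options.
        InfluxDBSettings settings =
            InfluxDBSettings.builder()
                .withHost("http://localhost:8086")         // assumed host
                .withDatabase("beam_test_metrics")         // assumed database
                .withMeasurement("sql_bqio_read_java_batch")
                .get();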

3 tests completed, 2 failed
Finished generating test XML results (0.041 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.074 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 5 mins 0.158 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 36s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/6wpswrc3eggoq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2505

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2505/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11516] Upgrade to pylint 2.11.1, fix warnings (#15612)

[noreply] [BEAM-12979, BEAM-11097] Change cache to store ReStreams, clean up to…

[noreply] [BEAM-3304, BEAM-12513] Trigger changes and Windowing. (#15644)


------------------------------------------
[...truncated 345.64 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'
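The -DbeamTestPipelineOptions system property in the command line above is how the Gradle worker hands pipeline flags to the test JVM; Beam's TestPipeline parses it as a JSON array of flag strings. A minimal sketch of the consuming side:

    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.testing.TestPipeline;

    // TestPipeline reads and parses the "beamTestPipelineOptions" system
    // property (a JSON array of "--flag=value" strings) into PipelineOptions.
    PipelineOptions options = TestPipeline.testingPipelineOptions();
    TestPipeline pipeline = TestPipeline.fromOptions(options);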

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 6:45:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 6:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:24 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
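    (This is Beam's standard coder-inference failure for a PCollection<Row>: a Row PCollection needs an explicit schema before the pipeline is finalized. A minimal sketch of the fix the error message itself suggests; field names follow the query above, and the score field type is an assumption.)

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        /** Attaches the schema that coder inference is asking for. */
        static PCollection<Row> attachRowSchema(PCollection<Row> rows) {
          Schema schema =
              Schema.builder()
                  .addNullableField("author", Schema.FieldType.STRING)  // `by` aliased as author
                  .addNullableField("type", Schema.FieldType.STRING)
                  .addNullableField("title", Schema.FieldType.STRING)
                  .addNullableField("score", Schema.FieldType.INT64)    // assumed type
                  .build();
          return rows.setRowSchema(schema);  // or: rows.setCoder(RowCoder.of(schema))
        }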

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 6:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 6:45:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test72140529376117676.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-PR_BJ3ikKm-6LOqRykJusDqgJsTe7zSj7eRmfQJgYoE.jar
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash bd8630fd7f753a33f54403bf63981e523a3aabf1206713538442f3b9b568f836> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vYYw_X91OjP1RAO_Y5geUjo6q_EgZxNThELzubVo-DY.pb
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 6:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_11_45_33-4273172218913284806?project=apache-beam-testing
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_11_45_33-4273172218913284806
    Oct 04, 2021 6:45:34 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_11_45_33-4273172218913284806
    Oct 04, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T18:45:36.636Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:45.198Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.072Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.102Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.131Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.190Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.221Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.248Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.559Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:46.638Z: Starting 5 workers in us-central1-a...
    Oct 04, 2021 6:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:45:50.114Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:33.411Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:33.438Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Oct 04, 2021 6:46:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:46:43.775Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:07.994Z: Workers have started successfully.
    Oct 04, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:08.018Z: Workers have started successfully.
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.652Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.848Z: Cleaning up.
    Oct 04, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:47:39.932Z: Stopping worker pool...
    Oct 04, 2021 6:50:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:50:09.480Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 6:50:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T18:50:09.522Z: Worker pool stopped.
    Oct 04, 2021 6:50:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_11_45_33-4273172218913284806 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e862aec7-8559-413b-acc5-561fea98dfe5 and timestamp: 2021-10-04T18:50:15.536000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.527

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:50:15 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 5 mins 0.526 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 56s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/cm55novmyco2i

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2504

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2504/display/redirect>

Changes:


------------------------------------------
[...truncated 337.89 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 12:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 12:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 12:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1815534418436434907.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-a-IMR2wQSFe-GFwdBz22Tbp0Uso2DoIkALbtTEexiH4.jar
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 64d76306aac984008070e125d6aecc21a1e45f4b6708069d0f55aae50b3d5215> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ZNdjBqrJhACAcOEl1q7MIaHkX0tnCAadD1Wq5Qs9UhU.pb
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-04_05_45_03-2710652158831162577?project=apache-beam-testing
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-04_05_45_03-2710652158831162577
    Oct 04, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-04_05_45_03-2710652158831162577
    Oct 04, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T12:45:06.952Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:14.538Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.376Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.405Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.442Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.518Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.543Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.575Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:15.970Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:16.039Z: Starting 5 workers in us-central1-c...
    Oct 04, 2021 12:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:45:40.544Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:01.393Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:24.847Z: Workers have started successfully.
    Oct 04, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:24.886Z: Workers have started successfully.
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.745Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.916Z: Cleaning up.
    Oct 04, 2021 12:46:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:46:54.985Z: Stopping worker pool...
    Oct 04, 2021 12:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:49:20.778Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 12:49:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T12:49:20.825Z: Worker pool stopped.
    Oct 04, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-04_05_45_03-2710652158831162577 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): bcbc0d4f-976c-4cfc-b1fb-b0f75a228cab and timestamp: 2021-10-04T12:49:27.844000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.195

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 41.167 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/d6xt5x6bxlooa

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2503

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2503/display/redirect>

Changes:


------------------------------------------
[...truncated 337.87 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 6:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
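
(Editor's note: the exception text above names the available remedies itself: set a coder explicitly with .setCoder(), or attach a schema with PCollection.setRowSchema so Beam can infer a RowCoder. Below is a minimal, self-contained sketch of the setRowSchema remedy. It is illustrative only, not the BigQueryIOPushDownIT source; the class name, input data, and four-field schema are hypothetical, chosen to mirror the projected columns in the query above.)

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Hypothetical schema mirroring the projected columns of the query.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        // A plain ParDo emitting Row values carries no schema of its own, so
        // no coder can be inferred; without the setRowSchema call below,
        // pipeline construction fails with the same IllegalStateException
        // seen in the log above.
        PCollection<Row> rows = p
            .apply(Create.of("alice,story,hello,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void process(@Element String line, OutputReceiver<Row> out) {
                String[] f = line.split(",");
                out.output(Row.withSchema(schema)
                    .addValues(f[0], f[1], f[2], Integer.parseInt(f[3])).build());
              }
            }))
            .setRowSchema(schema); // attach the schema so Beam can use RowCoder

        p.run().waitUntilFinish();
      }
    }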

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
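
(Editor's note: at the IO level, the push-down the planner reports here corresponds roughly to a BigQuery Storage API read that requests only the used fields and evaluates the filter server-side. A hedged sketch of that equivalent read using BigQueryIO follows; the table spec is an assumption for illustration and is not taken from the test.)

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Request only the used fields and push the filter to the storage
        // API, mirroring usedFields and BigQueryFilter in the BEAMPlan above.
        PCollection<TableRow> rows = p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // table assumed
                .withMethod(TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }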
    Oct 04, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2667136400420258841.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Bb_Z-0M6T2LgCreRBtcaIHOWYi3qftdkG1mTO2GWyaE.jar
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 450a2558d4479186103647f4b0977d861861ccfd1a0c98b1bfbd0338e3ebb648> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RQolWNRHkYYQNkf0sJd9hhhhzP0aDJixv70DOOPrtkg.pb
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_23_45_05-12198235828307472321?project=apache-beam-testing
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_23_45_05-12198235828307472321
    Oct 04, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_23_45_05-12198235828307472321
    Oct 04, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T06:45:08.658Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:14.788Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.499Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.536Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.562Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.625Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.650Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:15.681Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:16.006Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:16.080Z: Starting 5 workers in us-central1-c...
    Oct 04, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:47.382Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:45:49.257Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:23.209Z: Workers have started successfully.
    Oct 04, 2021 6:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:23.232Z: Workers have started successfully.
    Oct 04, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.130Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.263Z: Cleaning up.
    Oct 04, 2021 6:46:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:46:52.342Z: Stopping worker pool...
    Oct 04, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:49:12.904Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T06:49:12.942Z: Worker pool stopped.
    Oct 04, 2021 6:49:18 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_23_45_05-12198235828307472321 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c0ca4b49-f9cd-421a-b444-ecae683e9e86 and timestamp: 2021-10-04T06:49:18.324000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.498

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 6:49:18 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 29.982 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 59s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gk3pnulmpgp4c

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2502

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2502/display/redirect>

Changes:


------------------------------------------
[...truncated 338.43 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 04, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 04, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 04, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 04, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 04, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 04, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 04, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7640060808782137549.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_O0Ja9OWe15cU_t011NGBV1a8M3VwOqAjZpVDkg-qPI.jar
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 04, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 83c757b56fb1694270240bedcfac0542621ff2ca66dffa6368a7e9589c7cfa84> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-g8dXtW-xaUJwJAvtz6wFQmIf8spm3_pjaKfpWJx8-oQ.pb
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 04, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_17_45_04-1600347619784672286?project=apache-beam-testing
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_17_45_04-1600347619784672286
    Oct 04, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_17_45_04-1600347619784672286
    Oct 04, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-04T00:45:08.100Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:13.680Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.480Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.520Z: Expanding GroupByKey operations into optimizable parts.
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.548Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.608Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.634Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:14.665Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 04, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:15.047Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:15.116Z: Starting 5 workers in us-central1-a...
    Oct 04, 2021 12:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:45:24.474Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 04, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:00.879Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:00.910Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 04, 2021 12:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:11.234Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 04, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:30.448Z: Workers have started successfully.
    Oct 04, 2021 12:46:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:30.483Z: Workers have started successfully.
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.280Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.426Z: Cleaning up.
    Oct 04, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:46:58.533Z: Stopping worker pool...
    Oct 04, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:49:24.802Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 04, 2021 12:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-04T00:49:24.839Z: Worker pool stopped.
    Oct 04, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_17_45_04-1600347619784672286 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c9148af6-bf85-40a5-9cc7-b8a40026c2c7 and timestamp: 2021-10-04T00:49:32.632000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                      6.64

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 04, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 44.444 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/g4r7rxbdhxnhe

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2501

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2501/display/redirect>

Changes:


------------------------------------------
[...truncated 337.57 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 6:44:50 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
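    Here the planner has collapsed the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel, so only the four used fields are read and the supported filter is evaluated inside BigQuery. Push-down relies on the DIRECT_READ method shown in the log lines above (the BigQuery Storage Read API). A sketch of how such a table might be declared and queried through Beam SQL; the column list, LOCATION, and the pipeline variable are placeholder assumptions, and only the method property mirrors the log:

        import org.apache.beam.sdk.extensions.sql.SqlTransform;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // Abbreviated, hypothetical schema and location; TBLPROPERTIES selects
        // the read method that makes projection/filter push-down possible.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, type VARCHAR, title VARCHAR, score INT) "
                + "TYPE bigquery "
                + "LOCATION 'apache-beam-testing:some_dataset.hacker_news' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> filtered =
            pipeline.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));
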
    Oct 03, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6261414022396518308.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-xJD2kEJYDxv8ItFUGHq5jCW2mFsTZLqZNCbcpTtZIdw.jar
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash ccc9bff7f84cd21d2250b39f5ffc744f971e678c21dd82e9577193ff30d1f315> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zMm_9_hM0h0iULOfX_x0T5ceZ4wh3YLpV3GT_zDR8xU.pb
    Oct 03, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_11_45_03-16988647387276380313?project=apache-beam-testing
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_11_45_03-16988647387276380313
    Oct 03, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_11_45_03-16988647387276380313
    Oct 03, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T18:45:06.612Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:12.472Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.365Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.404Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.432Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.518Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.568Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.600Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:13.973Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:14.040Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:30.087Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 6:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:45:59.131Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:26.314Z: Workers have started successfully.
    Oct 03, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:26.339Z: Workers have started successfully.
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.079Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.217Z: Cleaning up.
    Oct 03, 2021 6:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:46:57.283Z: Stopping worker pool...
    Oct 03, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:49:24.186Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T18:49:24.227Z: Worker pool stopped.
    Oct 03, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_11_45_03-16988647387276380313 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8d2be147-4c96-4c97-b094-d3156fae0888 and timestamp: 2021-10-03T18:49:30.564000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.042

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 44.139 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/k6l67cyngzccm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2500

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2500/display/redirect>

Changes:


------------------------------------------
[...truncated 339.36 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 12:45:03 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 03, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4433133106638732651.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-Qf3UilYOzWOXyJlTGj3a4ol8g6kpJ5ZkS07Lg0j4f8o.jar
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f8eafff2db1b48bf88dc40919dd0ddd0941d770305d4e73ac56de9d1f42425b1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline--Or_8tsbSL-I3ECRndDd0JQddwMF1Oc6xW3p0fQkJbE.pb
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-03_05_45_16-12243917676759023265?project=apache-beam-testing
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-03_05_45_16-12243917676759023265
    Oct 03, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-03_05_45_16-12243917676759023265
    Oct 03, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T12:45:21.940Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.223Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.951Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:27.988Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.036Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.096Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.121Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.158Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.502Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:28.582Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:45:50.717Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:13.091Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:13.122Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Oct 03, 2021 12:46:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:23.442Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:43.951Z: Workers have started successfully.
    Oct 03, 2021 12:46:44 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:46:43.988Z: Workers have started successfully.
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.720Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.861Z: Cleaning up.
    Oct 03, 2021 12:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:47:15.948Z: Stopping worker pool...
    Oct 03, 2021 12:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:49:36.439Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 12:49:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T12:49:36.483Z: Worker pool stopped.
    Oct 03, 2021 12:49:41 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-03_05_45_16-12243917676759023265 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4b515cf7-04e3-4a61-8f3a-0f1697163c65 and timestamp: 2021-10-03T12:49:41.704000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.184

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 44.84 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/gg7przbcx7dfs

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2499

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2499/display/redirect>

Changes:


------------------------------------------
[...truncated 340.28 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 6:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 03, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5527341037863145142.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ldCjmuz0bi2z703eO-kB9-TVejm2o8cJKIVQDSJHXi0.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 6:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c6679677be01e2c3c4bb0d49fc728ab65ece3941587bdd46075f16e07952e5a1> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xmeWd74B4sPEuw1J_HKKtl7OOUFYe91GB18W4HlS5aE.pb
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_23_45_09-11935331738148933063?project=apache-beam-testing
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_23_45_09-11935331738148933063
    Oct 03, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_23_45_09-11935331738148933063
    Oct 03, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T06:45:12.715Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.220Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.870Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.906Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.932Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:18.995Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.022Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.042Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.385Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:19.461Z: Starting 5 workers in us-central1-a...
    Oct 03, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:45:35.885Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 6:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:01.146Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:27.186Z: Workers have started successfully.
    Oct 03, 2021 6:46:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:27.216Z: Workers have started successfully.
    Oct 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.302Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.426Z: Cleaning up.
    Oct 03, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:46:55.501Z: Stopping worker pool...
    Oct 03, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:49:23.087Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 6:49:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T06:49:23.136Z: Worker pool stopped.
    Oct 03, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_23_45_09-11935331738148933063 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): aa063f72-c3a3-4f32-81b6-cc209e47ed4e and timestamp: 2021-10-03T06:49:30.085000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.325

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 6:49:30 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 38.779 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wsd6paxz33bg2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2498

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2498/display/redirect>

Changes:


------------------------------------------
[...truncated 339.30 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 03, 2021 12:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 03, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
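
The SQLPlan/BEAMPlan pair above is Calcite's logical plan and the Beam plan it is lowered to for this query. For orientation only, here is a minimal sketch of issuing the same query through Beam SQL's SqlTransform, assuming a schema-aware PCollection<Row> named hackerNews stands in for the `beam`.`HACKER_NEWS` table; the test itself goes through BeamSqlRelUtils.toPCollection instead, so this is an illustration, not the harness code:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class SqlPlanSketch {
      static PCollection<Row> filterStories(PCollection<Row> hackerNews) {
        // The tuple tag name becomes the table name visible to the query;
        // `by` is a Calcite keyword, hence the backticks, as in the log above.
        return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }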


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
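
Both read tests fail the same way: the PCollection<Row> output of the RowMonitor ParDo has no coder, and Beam cannot infer one for Row without a schema. The error text itself names the remedy: attach a schema with PCollection.setRowSchema so a RowCoder can be derived. A minimal sketch, assuming field names and types taken from the projected query above (the real HACKER_NEWS column types may differ):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowCoderFix {
      // Field names/types are assumed from the SQL projection, not read
      // from the actual HACKER_NEWS table definition.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", FieldType.STRING)
                .addNullableField("type", FieldType.STRING)
                .addNullableField("title", FieldType.STRING)
                .addNullableField("score", FieldType.INT64)
                .build();
        // setRowSchema installs a RowCoder, which lets finishSpecifying()
        // resolve the coder during pipeline construction.
        return rows.setRowSchema(schema);
      }
    }

With a schema attached, pipeline construction can proceed past finishSpecifying, which is presumably why readUsingDirectReadMethodPushDown, whose push-down source emits rows with a known schema, runs to DONE below while the other two tests fail before submission.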

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 03, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 03, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
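
The pushed-down field list and filter above mean the projection and predicate are evaluated by the BigQuery Storage Read API rather than inside the pipeline, so only the four referenced columns and the matching rows leave BigQuery. Outside of Beam SQL, an equivalent direct read can be expressed on BigQueryIO; a hypothetical sketch (the table reference is illustrative, not the one this test reads):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDown {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // illustrative table
                .withMethod(Method.DIRECT_READ)
                // Both options are honored only with DIRECT_READ; the Storage
                // Read API then serves just these columns and matching rows.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }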
    Oct 03, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 03, 2021 12:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test924415722746679332.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test--TrSaHjGfQDLqi9c4923o6CG9tiTeH0tOQo81B3STRw.jar
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 03, 2021 12:45:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 2378d85467543a1a3b5e0d3948b43275ba3f975a1b25eef73fbda532a3bf4a17> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-I3jYVGdUOho7Xg05SLQydbo_l1obJe73P72lMqO_Shc.pb
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 03, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_17_45_08-847305522168178165?project=apache-beam-testing
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_17_45_08-847305522168178165
    Oct 03, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_17_45_08-847305522168178165
    Oct 03, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-03T00:45:14.054Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:19.984Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.834Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.876Z: Expanding GroupByKey operations into optimizable parts.
    Oct 03, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.911Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:20.984Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.010Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.041Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.390Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:21.446Z: Starting 5 workers in us-central1-c...
    Oct 03, 2021 12:45:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:45:34.512Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 03, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:05.764Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 03, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:31.377Z: Workers have started successfully.
    Oct 03, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:31.412Z: Workers have started successfully.
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.267Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.464Z: Cleaning up.
    Oct 03, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:46:59.538Z: Stopping worker pool...
    Oct 03, 2021 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:49:24.875Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 03, 2021 12:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-03T00:49:24.919Z: Worker pool stopped.
    Oct 03, 2021 12:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_17_45_08-847305522168178165 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 76ceceec-88f9-4777-8b6c-bd8bf58d3eeb and timestamp: 2021-10-03T00:49:32.184000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.986

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 03, 2021 12:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.021 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 40.668 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/koe2y4us665ni

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2497

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2497/display/redirect>

Changes:


------------------------------------------
[...truncated 338.04 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 6:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 6:44:51 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 6:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:54 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@70049737]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@138522610]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 6:45:00 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2270684606588946108.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aI3GlyaS01dmmjHAWA8z2L4kT-2mYbi05Qb1Pzn6Bas.jar
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 6:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 7e76b48122056d8d57589b41fd006d6f26ee28106fa96c9e64e7d6cf4024fe99> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-fna0gSIFbY1XWJtB_QBtbybuKBBvqWyeZOfWz0Ak_pk.pb
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 6:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_11_45_03-10805159815140293046?project=apache-beam-testing
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_11_45_03-10805159815140293046
    Oct 02, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_11_45_03-10805159815140293046
    Oct 02, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T18:45:07.210Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.122Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.765Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.803Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.835Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.904Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.920Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:12.959Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:13.277Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:13.348Z: Starting 5 workers in us-central1-a...
    Oct 02, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:43.222Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 6:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:45:59.456Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:24.817Z: Workers have started successfully.
    Oct 02, 2021 6:46:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:24.842Z: Workers have started successfully.
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.107Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.231Z: Cleaning up.
    Oct 02, 2021 6:46:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:46:51.302Z: Stopping worker pool...
    Oct 02, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:49:16.327Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 6:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T18:49:16.366Z: Worker pool stopped.
    Oct 02, 2021 6:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_11_45_03-10805159815140293046 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 50b2eac8-5528-42f0-8769-ddffd4ba8580 and timestamp: 2021-10-02T18:49:24.414000000Z:
                     Metric:                    Value:
                   read_time                     6.322
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:49:24 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 37.51 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/opgofo7adrz4o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2496

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2496/display/redirect>

Changes:


------------------------------------------
[...truncated 339.09 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
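
The IllegalStateException above is the SDK refusing to finalize the pipeline
because it cannot infer a Coder for the Row elements emitted by
ParDo(RowMonitor). The error text names the remedy: attach a schema to the
PCollection via setRowSchema. A minimal, self-contained sketch of that fix
follows; the schema (four nullable fields mirroring the columns the query
selects) is an assumption rather than the actual HACKER_NEWS schema, and the
DoFn merely stands in for the test's RowMonitor.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Assumed schema covering the four columns the query projects.
        final Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    "RowMonitorStandIn",
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void processElement(ProcessContext c) {
                            c.output(
                                Row.withSchema(schema)
                                    .addValues("someuser", c.element(), "a title", 3L)
                                    .build());
                          }
                        }))
                // Without this call, Coder inference fails exactly as in the
                // stack trace above; setRowSchema attaches the schema (and a
                // RowCoder) to the ParDo output.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }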

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test33187000213534086.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-s0Ua8XEPsFQPPuOm96uiigzcXG-39oKQ9e8AXeUrHS8.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 0dc756c4b389ce0fa11003af05bca7f297ddf24ca271270ad5807b7d2e746487> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-DcdWxLOJzg-hEAOvBbyn8pfd8kyicScK1YB7fS50ZIc.pb
    Oct 02, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-02_05_45_07-11784265171889707970?project=apache-beam-testing
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-02_05_45_07-11784265171889707970
    Oct 02, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-02_05_45_07-11784265171889707970
    Oct 02, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T12:45:10.607Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.109Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.936Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:17.968Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.004Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.081Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.107Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.144Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.480Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:18.550Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:44.525Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:49.478Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:49.509Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Oct 02, 2021 12:45:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:45:59.755Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:23.466Z: Workers have started successfully.
    Oct 02, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:23.500Z: Workers have started successfully.
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.468Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.591Z: Cleaning up.
    Oct 02, 2021 12:46:53 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:46:52.665Z: Stopping worker pool...
    Oct 02, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:49:17.724Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 12:49:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T12:49:17.772Z: Worker pool stopped.
    Oct 02, 2021 12:49:24 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-02_05_45_07-11784265171889707970 finished with status DONE.
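
Note the contrast with the two failures above: here the planner replaced the
BeamCalcRel + BeamIOSourceRel pair with a single BeamPushDownIOSourceRel, so
the projection (usedFields) and the filter ran inside the BigQuery Storage
Read API session rather than in the pipeline, and the job completed. A rough
sketch of wiring the same query through SqlTransform with a DDL-registered
BigQuery table follows; the table location, the trimmed column list, and the
class name are illustrative placeholders, not the test's actual setup.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative DDL: method DIRECT_READ selects the Storage Read API,
        // which is what makes projection and filter push-down possible.
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS "
                + "(`by` VARCHAR, `type` VARCHAR, title VARCHAR, score INT) "
                + "TYPE bigquery "
                + "LOCATION 'some-project:some_dataset.hacker_news' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, `type`, title, score "
                            + "FROM HACKER_NEWS "
                            + "WHERE (`type` = 'story' OR `type` = 'job') "
                            + "AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }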

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dedadd50-3e74-47e6-9654-b427973b8b6b and timestamp: 2021-10-02T12:49:24.131000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.222

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:49:24 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
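
The warning above means the run's metrics were computed but never published:
InfluxDBPublisher bails out when the measurement/database settings are
absent. In Beam's test infrastructure these normally arrive as two more
pipeline options in the -DbeamTestPipelineOptions array, along the lines of
the snippet below. The option names are assumed from Beam's load-test
configurations and the database name is a placeholder; treat both as
unverified for this particular job.

    "--influxMeasurement=sql_bqio_read_java_batch","--influxDatabase=example_metrics_db"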

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 34.872 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yxcpdea4k4l5w

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2495

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2495/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12513] Schemas and Coders (#15632)


------------------------------------------
[...truncated 337.54 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 6:44:50 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 6:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 6:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:54 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 02, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3291093574128526183.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-JQLYC_B7axlOWx2YCYh9nveYzWLbXvACd1k_Cr3TckU.jar
    Oct 02, 2021 6:45:00 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash cdf0481c711dd159373ac3b3bc44104c72ecebf31f065a5590122e6ffd5e8ed7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zfBIHHEd0Vk3OsOzvEQQTHLs6_MfBlpVkBIub_1ejtc.pb
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_23_45_03-2032330720690453059?project=apache-beam-testing
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_23_45_03-2032330720690453059
    Oct 02, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_23_45_03-2032330720690453059
    Oct 02, 2021 6:45:09 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T06:45:06.938Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:13.276Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.015Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.055Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.081Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.144Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.170Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.203Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.506Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:14.572Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 6:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:37.970Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:45:49.961Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:24.150Z: Workers have started successfully.
    Oct 02, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:24.180Z: Workers have started successfully.
    Oct 02, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:51.961Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:52.111Z: Cleaning up.
    Oct 02, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:46:52.184Z: Stopping worker pool...
    Oct 02, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:49:12.021Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 6:49:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T06:49:12.123Z: Worker pool stopped.
    Oct 02, 2021 6:49:17 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_23_45_03-2032330720690453059 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 548c4b23-6edb-4267-9ef5-fe7adade5bf8 and timestamp: 2021-10-02T06:49:17.096000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.932

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 6:49:17 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 30.558 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 0s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wqhy3kqscxoyc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2494

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2494/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11217] Implemented metrics filtering (#15482)


------------------------------------------
[...truncated 337.35 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 02, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 02, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 02, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 02, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 02, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
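
    For comparison, what the planner pushed down here corresponds at the IO
    level to a Storage API read with selected fields plus a row restriction. A
    rough Java equivalent is sketched below; the table reference is a
    placeholder, since the IT resolves its table through its own provider:

        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

        // Direct read projecting only the used fields and handing the filter
        // to BigQuery, mirroring the BeamPushDownIOSourceRel plan above.
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // placeholder table
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
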
    Oct 02, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3974640514448782353.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7HGRObmaSGVFS8QV2P8thwn5FmC6eCrknOuT0Bf2cyk.jar
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 02, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2bd99464ec2b6ff3616614c2a789ecb41b73f4d396080cab872b45206c25591d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-K9mUZOwrb_NhZhTCp4nstBtz9NOWCAyrhytFIGwlWR0.pb
    Oct 02, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 02, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_17_45_05-1229165175629021093?project=apache-beam-testing
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_17_45_05-1229165175629021093
    Oct 02, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_17_45_05-1229165175629021093
    Oct 02, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-02T00:45:08.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 02, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:15.499Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.183Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.225Z: Expanding GroupByKey operations into optimizable parts.
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.259Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.334Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.357Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.381Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.745Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:16.816Z: Starting 5 workers in us-central1-c...
    Oct 02, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:45:28.234Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 02, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:00.975Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 02, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:26.187Z: Workers have started successfully.
    Oct 02, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:26.212Z: Workers have started successfully.
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.483Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.671Z: Cleaning up.
    Oct 02, 2021 12:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:46:56.794Z: Stopping worker pool...
    Oct 02, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:49:15.664Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 02, 2021 12:49:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-02T00:49:15.711Z: Worker pool stopped.
    Oct 02, 2021 12:49:20 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_17_45_05-1229165175629021093 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4f713dcc-a53e-44f9-bef7-5aaf4a26ab5a and timestamp: 2021-10-02T00:49:20.830000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    10.426

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 02, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
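
    That warning is expected with this job's configuration: the
    -DbeamTestPipelineOptions shown earlier only carry the BigQuery metrics sink
    (--metricsBigQueryDataset/--metricsBigQueryTable), so the InfluxDB publisher
    skips publishing for lack of a measurement/database. Feeding InfluxDB as
    well would mean supplying the corresponding settings; the Beam test
    utilities read options along the lines of --influxDatabase and
    --influxMeasurement, though those exact names are an assumption here (see
    org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher).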

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
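
(Both failures are the non-push-down reads, readUsingDirectReadMethod and
readUsingDefaultMethod, which die on the coder error; the push-down variant
readUsingDirectReadMethodPushDown ran to completion and reported its metrics.)
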
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 32.389 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rmy5gwhhtwwa2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2493

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2493/display/redirect?page=changes>

Changes:

[noreply] Switching to innerText and removing alerts

[noreply] [BEAM-12993] Update to Debezium 1.7.0.Final (#15636)

[noreply] Minor: Fix `Iterable[SplitResultResidual]` type errors (#15634)


------------------------------------------
[...truncated 338.04 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]
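
    The bindings warning is benign but worth decoding: both the staged legacy
    Dataflow worker jar and the slf4j-jdk14 dependency put a StaticLoggerBinder
    on the classpath, and SLF4J settles on the JDK14 binder. If it ever needed
    silencing, one option is a Gradle dependency exclusion in the perf-tests
    module (for example, excluding group org.slf4j, module slf4j-jdk14 from the
    test runtime configuration); which binding to keep is a project decision,
    so treat that as a sketch rather than a recommendation.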

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
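
    Per that warning, the empty --workerHarnessContainerImage= in the pipeline
    options above is the deprecated spelling; on newer SDKs the same intent is
    expressed as --sdkContainerImage= (both left empty appear to select the
    default worker image, an assumption worth checking against the job
    definition).
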
    Oct 01, 2021 6:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 01, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4824795570540795047.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-ZSZHzKZ7N_SQD-MZgfeZ7Ndjbohxq-Ok3LUbeo0IcVo.jar
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash b3c4d4458a3186eb4ce590de1e748a9f4e5d5f53cc93309e911437bc92446ff9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-s8TURYoxhutM5ZDeHnSKn05dX1PMkzCekRQ3vJJEb_k.pb
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_11_45_13-142456295206191151?project=apache-beam-testing
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_11_45_13-142456295206191151
    Oct 01, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_11_45_13-142456295206191151
    Oct 01, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T18:45:17.088Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.150Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.930Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.963Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:23.992Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.079Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.106Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.121Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.452Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:24.525Z: Starting 5 workers in us-central1-a...
    Oct 01, 2021 6:45:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:45:29.389Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:06.164Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:34.645Z: Workers have started successfully.
    Oct 01, 2021 6:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:46:34.687Z: Workers have started successfully.
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.442Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.597Z: Cleaning up.
    Oct 01, 2021 6:47:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:47:01.670Z: Stopping worker pool...
    Oct 01, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:49:23.526Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 6:49:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T18:49:23.568Z: Worker pool stopped.
    Oct 01, 2021 6:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_11_45_13-142456295206191151 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 64c56dab-ce87-4057-b12d-fc9473e5b8f2 and timestamp: 2021-10-01T18:49:29.905000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.038

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:49:30 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 34.491 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 7s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/rvugmzkytt6qg

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2492

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2492/display/redirect>

Changes:


------------------------------------------
[...truncated 339.39 KB...]
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 12:44:57 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@850165773]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@793252937]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 01, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4723206100083085531.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qEpWi-6a7u1_PK_4qWoBtbdwxjaeiG6zedVyYTv0MG0.jar
    Oct 01, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.beam/beam-vendor-calcite-1_26_0/0.1/5fae4e97a2d8739462bd1572e48d01228766b6ef/beam-vendor-calcite-1_26_0-0.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/beam-vendor-calcite-1_26_0-0.1-pYZ7esxRWyhKmBqBdfrpnxvg8woyykTvGbaCvLtyRyA.jar
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 1 seconds
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 33ad4cc501abdb54fba5f37c03d455d33c6347eb4a1246cc2efe46128fcb61af> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-M61MxQGr21T7pfN8A9RV0zxjR-tKEkbMLv5GEo_LYa8.pb
    Oct 01, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-10-01_05_45_11-5449097116956767890?project=apache-beam-testing
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-10-01_05_45_11-5449097116956767890
    Oct 01, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-10-01_05_45_11-5449097116956767890
    Oct 01, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T12:45:16.076Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.067Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.891Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.937Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:24.972Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.073Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.099Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.140Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.644Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:25.756Z: Starting 5 workers in us-central1-c...
    Oct 01, 2021 12:45:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:45:48.950Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:10.070Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:34.013Z: Workers have started successfully.
    Oct 01, 2021 12:46:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:46:34.044Z: Workers have started successfully.
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.394Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.554Z: Cleaning up.
    Oct 01, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:47:03.634Z: Stopping worker pool...
    Oct 01, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:49:20.174Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 12:49:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T12:49:20.237Z: Worker pool stopped.
    Oct 01, 2021 12:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-10-01_05_45_11-5449097116956767890 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d0ab1099-69b9-40ed-9191-e65610971fe5 and timestamp: 2021-10-01T12:49:27.722000000Z:
                     Metric:                    Value:
                   read_time                     9.199
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
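
The warning above means the run computed read_time and fields_read but never pushed them to InfluxDB, because the publisher had no measurement/database configured. A minimal sketch of wiring that up through Beam's InfluxDBSettings helper in the test-utils module follows; the builder method names are recalled from that module rather than taken from this log, and the host/database/measurement values are placeholders, so verify both against the SDK version in use.

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Sketch (assumed API): supply the two properties the warning names,
    // plus the endpoint, before metrics are published. All three values
    // here are placeholders.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_performance")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();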

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 34.786 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/5jh474kjyd6vq

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2491

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2491/display/redirect>

Changes:


------------------------------------------
[...truncated 337.78 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':',5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 6:44:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 6:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
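
For context, the SQL text, SQLPlan, and BEAMPlan above are the three stages of Beam SQL's Calcite pipeline: the parsed query, the logical relational plan, and the Beam-executable plan (here a BeamCalcRel doing the filter and projection over a plain BeamIOSourceRel, i.e. no push-down). A minimal sketch of running an equivalent query through SqlTransform, assuming a schema-aware PCollection<Row> as input (the input collection and its schema are placeholders, not the test fixture):

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch: PCOLLECTION names the single input PCollection; the planner
    // turns this into the same LogicalProject/LogicalFilter shape as above.
    PCollection<Row> stories =
        hackerNewsRows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, `title`, `score` "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));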


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
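
This IllegalStateException is the failure behind the "2 failed" summary further down: the RowMonitor ParDo emits Beam Rows, and a Row carries no inferable coder, so the pipeline cannot finish specifying the output PCollection. The remedy the message itself names is to attach the schema explicitly. A minimal sketch under that reading, with the field types guessed from the query's projection and everything else a placeholder:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Sketch: a pass-through monitoring DoFn whose Row output gets an
    // explicit schema, which lets Beam build a RowCoder for it.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

    PCollection<Row> monitored =
        queryResult // placeholder: the PCollection<Row> from BeamSqlRelUtils.toPCollection
            .apply("ParDo(RowMonitor)", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void process(@Element Row row, OutputReceiver<Row> out) {
                out.output(row); // record monitoring metrics here
              }
            }))
            .setRowSchema(schema); // the fix the error message points at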

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@688000764]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
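
Unlike the two failing tests, this one planned a BeamPushDownIOSourceRel: the projection (usedFields) and the whole filter land inside the BigQuery Storage read itself, which is why only four columns and pre-filtered rows ever leave BigQuery. At the IO level the pushed-down read amounts to something like the following sketch; the table reference is a placeholder, while the field list and row restriction are copied from the log above.

    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    // Sketch: a Storage API (DIRECT_READ) read that projects and filters
    // server-side, mirroring usedFields and BigQueryFilter from the plan.
    TypedRead<com.google.api.services.bigquery.model.TableRow> read =
        BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // placeholder table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
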
    Oct 01, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 6:45:12 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3299828175081092784.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-7FktRZ62Gz2eHphKeQzeql6S6amjKEg1_JupBewy7bU.jar
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 44c58d35db3a08d70e83108a6ce29ae09f8ab7d0e8180073f4eb362973880439> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-RMWNNds6CNcOgxCKbOKa4J-Kt9DoGABz9Os2KXOIBDk.pb
    Oct 01, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 6:45:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_23_45_17-15965796272908244087?project=apache-beam-testing
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_23_45_17-15965796272908244087
    Oct 01, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_23_45_17-15965796272908244087
    Oct 01, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T06:45:20.619Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:27.388Z: Worker configuration: e2-standard-2 in us-central1-c.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.080Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.121Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.157Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.232Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 6:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.266Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.299Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:28.979Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:29.066Z: Starting 5 workers in us-central1-c...
    Oct 01, 2021 6:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:45:56.913Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 6:46:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:12.087Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:39.907Z: Workers have started successfully.
    Oct 01, 2021 6:46:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:46:39.944Z: Workers have started successfully.
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.239Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.390Z: Cleaning up.
    Oct 01, 2021 6:47:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:47:11.481Z: Stopping worker pool...
    Oct 01, 2021 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:49:38.376Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 6:49:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T06:49:38.417Z: Worker pool stopped.
    Oct 01, 2021 6:49:46 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_23_45_17-15965796272908244087 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f683088c-36be-4160-b47e-ac6bcabbe3bc and timestamp: 2021-10-01T06:49:46.231000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.766

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 6:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 53.614 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kbjuzj36jtjba

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2490

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2490/display/redirect?page=changes>

Changes:

[dpcollins] [BEAM-12908] Change to use PubsubSignal for information propagation so

[Robert Bradshaw] Preserve more types in transform replacement.

[Robert Bradshaw] Update test to reflect preserved type hint.

[noreply] [BEAM-11985] Python Bigtable - Implement IO Request Count metrics

[noreply] [BEAM-9918] Make TryCrossLanguage match non Try API (#15633)

[noreply] [BEAM-12957] Add support for pyarrow 5.x (#15588)


------------------------------------------
[...truncated 341.46 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 3d8d79e60883a9ead96d04050fca6504
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Oct 01, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Oct 01, 2021 12:45:10 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Oct 01, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Oct 01, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Oct 01, 2021 12:45:16 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Oct 01, 2021 12:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar to gs://temp-storage-for-perf-tests/loadtests/staging/gradle-worker-qOxdI4KUjo3GTP9rF7KeqZGuhs9XeyH-VdOKR6HUq6Q.jar
    Oct 01, 2021 12:45:19 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1429108089442257848.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-w7MG2wlG_XkuF6bjtEgQNkbrEnEKiP3pFRtFqP2J6pE.jar
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Oct 01, 2021 12:45:20 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash 0184a43e3e51cd74e76d0438af8bb9c64cb96b68908e74269f492224778065c7> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AYSkPj5RzXTnbQQ4r4u5xky5a2iQjnQmn0kiJHeAZcc.pb
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Oct 01, 2021 12:45:22 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_17_45_23-9121831774679467180?project=apache-beam-testing
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_17_45_23-9121831774679467180
    Oct 01, 2021 12:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_17_45_23-9121831774679467180
    Oct 01, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-10-01T00:45:27.377Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Oct 01, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:33.914Z: Worker configuration: e2-standard-2 in us-central1-a.
    Oct 01, 2021 12:45:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.799Z: Expanding CoGroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.840Z: Expanding GroupByKey operations into optimizable parts.
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.877Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:34.975Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.013Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.032Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.349Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:35.429Z: Starting 5 workers in us-central1-a...
    Oct 01, 2021 12:45:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:45:57.313Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Oct 01, 2021 12:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:16.336Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:16.362Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Oct 01, 2021 12:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:26.720Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Oct 01, 2021 12:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:49.531Z: Workers have started successfully.
    Oct 01, 2021 12:46:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:46:49.548Z: Workers have started successfully.
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:17.957Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:18.091Z: Cleaning up.
    Oct 01, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:47:18.173Z: Stopping worker pool...
    Oct 01, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:49:52.733Z: Autoscaling: Resized worker pool from 5 to 0.
    Oct 01, 2021 12:49:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-10-01T00:49:52.763Z: Worker pool stopped.
    Oct 01, 2021 12:49:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_17_45_23-9121831774679467180 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 47d31f15-03a8-4eb1-bea7-de151ad3e716 and timestamp: 2021-10-01T00:49:59.046000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.271

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Oct 01, 2021 12:49:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 8,5,main]) completed. Took 4 mins 53.151 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ltxf7omg5btf6

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2489

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2489/display/redirect?page=changes>

Changes:

[stranniknm] [BEAM-12951]: Create initial structure for Playground application

[rohde.samuel] Fix BEAM-12984

[noreply] Update Beam glossary (#15619)


------------------------------------------
[...truncated 344.31 KB...]
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 6:46:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 6:47:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 6:47:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:15 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@415755770]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@991913540]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
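
Both coder failures come from the same call pattern, visible in the frames above: the test parses the SELECT statement into a Beam relational node and then materializes it with BeamSqlRelUtils.toPCollection. A hedged reconstruction of that call site, assuming the node is produced via BeamSqlEnv.parseQuery (sqlEnv, pipeline, and query are stand-ins, not taken from this log):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamRelNode;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownCallSite {
      static PCollection<Row> toRows(Pipeline pipeline, BeamSqlEnv sqlEnv, String query)
          throws Exception {
        BeamRelNode node = sqlEnv.parseQuery(query);          // emits the SQLPlan>/BEAMPlan> logs above
        return BeamSqlRelUtils.toPCollection(pipeline, node); // the BeamSqlRelUtils.java:42/:81 frames in the trace
      }
    }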

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:47:19 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 6:47:20 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
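
    These two lines are the push-down taking effect: usedFields from the BEAMPlan> becomes the projected column list of the BigQuery Storage read, and BigQueryFilter becomes its row restriction, which is why this plan needs no BeamCalcRel where the other two plans did. In plain BigQueryIO terms the resulting read is roughly the following sketch (the table spec is hypothetical; the options are the standard DIRECT_READ ones):

        import com.google.api.services.bigquery.model.TableRow;
        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;

        class PushDownEquivalent {
          static BigQueryIO.TypedRead<TableRow> read() {
            return BigQueryIO.readTableRows()
                .from("some-project:beam.HACKER_NEWS")  // hypothetical table spec
                .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
          }
        }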
    Sep 30, 2021 6:47:20 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 6:47:29 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 6:47:31 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7139866847520102503.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-sUITyF-jIcOE0DPCp5g8ar_3Wzik35VTFkPaHmK9Nik.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/kafka/1.15.1/ccd374ca7e5d8b06fb31ecffeb03abc42feada84/kafka-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/kafka-1.15.1-Z4qZ3jdWXDvdB7xD9VivkdVQiRZWNWUhPSIANVCSZDA.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.testcontainers/testcontainers/1.15.1/91e6dfab8f141f77c6a0dd147a94bd186993a22c/testcontainers-1.15.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/testcontainers-1.15.1-_fMrLpRXSowMnAISRc73bMk5UqrXNnsv6vkPXOZRXu0.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.mongo/2.2.0/781d14f4e3d9eeb0b4c3e00a4ec165a04b3dc5c4/de.flapdoodle.embed.mongo-2.2.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.mongo-2.2.0-vNy3lJC0jW9u4Cy1AHsqSbjRUqOTX9ycpEmHkht7vvk.jar
    Sep 30, 2021 6:47:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-api/3.2.7/81408fc988c229ea11354fee9902c47842343f04/docker-java-api-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-api-3.2.7-AwtXDacySf3gUHoixVjjNqW0wUI0YcGJjMO-N7kxBho.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-csv/1.8/37ca9a9aa2d4be2599e55506a6d3170dd7a3df4/commons-csv-1.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-csv-1.8-qL1WZS7UZo2dWjOZSuUvWbnjnI6w68tmhOaK7udXmmE.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/commons-compiler/3.0.11/f2a6ec7fbc929c9fc87ff8bb486c0574951c5b09/commons-compiler-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/commons-compiler-3.0.11-DxpPXyZccBoxkzJErnBF_O8YtPpZUEF-Je5wvlDd2s8.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.codehaus.janino/janino/3.0.11/e699e368095379ba0402ea1780a87fcaea16e68f/janino-3.0.11.jar to gs://temp-storage-for-perf-tests/loadtests/staging/janino-3.0.11-kje3HSMpGA5ZIQ6aqhAO4xNFTvCuWIYIx1yxkxlZG-E.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/de.flapdoodle.embed/de.flapdoodle.embed.process/2.1.2/986b38302fa10018d5baccee7f655c31ee9afe5b/de.flapdoodle.embed.process-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/de.flapdoodle.embed.process-2.1.2-OasY7D5KRAimcZcWcjFwgi8Qb4B-iff1FfrVeNSih6A.jar
    Sep 30, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport-zerodep/3.2.7/d74cb05fabc57e9ec441dc39d0bc58ad649fad3d/docker-java-transport-zerodep-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-zerodep-3.2.7-O2a9rUb-899yDZMu4E8j9ItFOkNoggqbLP2xaLmOwJY.jar
    Sep 30, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.github.docker-java/docker-java-transport/3.2.7/315903a129f530422747efc163dd255f0fa2555e/docker-java-transport-3.2.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/docker-java-transport-3.2.7-ynNXlSXhDSyiLCXkA4c9E9_HGeKB7-wbpEzmswFrI-A.jar
    Sep 30, 2021 6:47:35 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 239 files cached, 11 files newly uploaded in 6 seconds
    Sep 30, 2021 6:47:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 6:47:36 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash c9aef7afd66ec0f6fc4129cd0a16db636b491f19f7c80ca12c3146d9d700016e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-ya73r9ZuwPb8QSnNChbbY2tJHxn3yAyhLDFG2dcAAW4.pb
    Sep 30, 2021 6:47:42 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 6:47:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 6:47:43 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_11_47_44-14843811000132432781?project=apache-beam-testing
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_11_47_44-14843811000132432781
    Sep 30, 2021 6:47:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_11_47_44-14843811000132432781
    Sep 30, 2021 6:47:52 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T18:47:50.444Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:47:59.801Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.631Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 6:48:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.678Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.711Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.775Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.823Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:00.856Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:01.271Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:48:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:01.351Z: Starting 5 workers in us-central1-c...
    Sep 30, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:28.732Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 6:48:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:37.174Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:48:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:37.205Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 30, 2021 6:48:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:48:47.443Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:11.591Z: Workers have started successfully.
    Sep 30, 2021 6:49:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:11.619Z: Workers have started successfully.
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.139Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.266Z: Cleaning up.
    Sep 30, 2021 6:49:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:49:41.345Z: Stopping worker pool...
    Sep 30, 2021 6:52:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:52:03.674Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 6:52:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T18:52:03.851Z: Worker pool stopped.
    Sep 30, 2021 6:52:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_11_47_44-14843811000132432781 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): e7dd9959-3546-470f-b447-57a5bb72fad4 and timestamp: 2021-09-30T18:52:12.078000000Z:
                     Metric:                    Value:
                   read_time                      9.98
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:52:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
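
    The metrics were collected (see the table above); only the InfluxDB export is skipped because the publisher finds no target configured. Assuming the standard Beam test-utility option names (worth verifying against the SDK version in use), the export would be enabled by extending -DbeamTestPipelineOptions with entries along the lines of:

        "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch","--influxHost=http://<influx-host>:8086"

    where all three values here are placeholders.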

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.058 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.075 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 5 mins 30.206 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 19s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mny4r4enhidhy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2488

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2488/display/redirect>

Changes:


------------------------------------------
[...truncated 339.34 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 12:44:52 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test888665778287048086.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CFgx9LkNDXWIzF4L_Q8WWHAUnGvAvMK_daSGkpVuM8k.jar
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash b4bf1acc6157a06cd79cecbba5e2af6c3f56cc71eb97ad21cbe6128e77c69dfb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tL8azGFXoGzXnOy7peKvbD9WzHHrl60hy-YSjnfGnfs.pb
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-30_05_45_04-8137050605073411640?project=apache-beam-testing
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-30_05_45_04-8137050605073411640
    Sep 30, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-30_05_45_04-8137050605073411640
    Sep 30, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T12:45:07.611Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:13.604Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.437Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.478Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.509Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.587Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.613Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.646Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:14.974Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:15.050Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:45:19.793Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 12:46:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:01.997Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:30.848Z: Workers have started successfully.
    Sep 30, 2021 12:46:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:46:30.871Z: Workers have started successfully.
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:02.871Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:02.997Z: Cleaning up.
    Sep 30, 2021 12:47:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:47:03.090Z: Stopping worker pool...
    Sep 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:49:35.992Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 12:49:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T12:49:36.074Z: Worker pool stopped.
    Sep 30, 2021 12:49:43 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-30_05_45_04-8137050605073411640 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 972d64f7-d3b2-4841-8f57-1bb2826145c6 and timestamp: 2021-09-30T12:49:43.636000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.914

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:49:44 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 56.406 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 26s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/eaexy6eq2bbxy

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2487

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2487/display/redirect>

Changes:


------------------------------------------
[...truncated 337.79 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 6:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 6:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@613927329]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 6:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 6:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2790166041164843000.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mTB5mjFKQujtR6LcWttiPQ9brPwnAPx19bogbCgUijI.jar
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 6:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104893 bytes, hash 8b044eaaacaa7e7f8aac2783740cb80a1c2e999f4b8ff427738710ecfb472750> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-iwROqqyqfn-KrCeDdAy4ChwumZ9Lj_Qnc4cQ7PtHJ1A.pb
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_23_45_04-480439106660977662?project=apache-beam-testing
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_23_45_04-480439106660977662
    Sep 30, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_23_45_04-480439106660977662
    Sep 30, 2021 6:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T06:45:08.114Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:13.883Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.778Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.820Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.844Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.893Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.911Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:14.932Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:15.268Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:15.337Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:45:36.225Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 6:46:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:00.035Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:26.000Z: Workers have started successfully.
    Sep 30, 2021 6:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:26.033Z: Workers have started successfully.
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.177Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.319Z: Cleaning up.
    Sep 30, 2021 6:46:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:46:53.400Z: Stopping worker pool...
    Sep 30, 2021 6:49:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:49:23.738Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T06:49:23.783Z: Worker pool stopped.
    Sep 30, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_23_45_04-480439106660977662 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cacf32ed-da08-4bcc-a773-ddf9035f0ba3 and timestamp: 2021-09-30T06:49:30.851000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.082

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
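
The two missing properties match the InfluxDB settings that other Beam performance jobs pass as pipeline options. A hypothetical fix, assuming the --influxDatabase/--influxMeasurement option names apply to this suite as well and with illustrative values, would be to extend the -DbeamTestPipelineOptions array shown at the top of this log:

    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",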

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 42.826 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/xelkpgsh6jara

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2486

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2486/display/redirect?page=changes>

Changes:

[kileysok] Update windmill state map to resolve *IfAbsent methods immediately

[piotr.szczepanik] [BEAM-12356] Fixed last non-cached usage of DatasetService in BigQuery

[noreply] [BEAM-12977] Translates Reshuffle in Portable Mode with Samza native

[noreply] [BEAM-12982] Help users debug which JvmInitializer is running and when.


------------------------------------------
[...truncated 344.50 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 42de1e1f803ec917230cdec58ad48498
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 30, 2021 12:46:03 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 30, 2021 12:46:04 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 30, 2021 12:46:05 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 30, 2021 12:46:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:07 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1413020570]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
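
The exception names its own remedy: attach a schema to the PCollection<Row> so the SDK can build a coder for it. A minimal sketch of that fix follows; PassThroughFn is a hypothetical stand-in for the test's RowMonitor DoFn, and the schema is an illustrative subset of the HACKER_NEWS projection selected by the query above:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowMonitorFix {
      // Pass-through stand-in (hypothetical) for the test's RowMonitor DoFn.
      static class PassThroughFn extends DoFn<Row, Row> {
        @ProcessElement
        public void process(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }

      // Illustrative subset of the fields projected by the query.
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      // Attaching the row schema lets the SDK infer a coder for the output.
      static PCollection<Row> monitor(PCollection<Row> rows) {
        return rows.apply(ParDo.of(new PassThroughFn())).setRowSchema(SCHEMA);
      }
    }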

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@210017548]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
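
The same exception also lists a second remedy: set the coder explicitly with .setCoder(). Reusing SCHEMA and PassThroughFn from the sketch above, that variant is a one-liner:

    import org.apache.beam.sdk.coders.RowCoder;

    // Equivalent alternative to setRowSchema: derive the coder directly
    // from the same schema and set it on the ParDo output.
    PCollection<Row> monitored =
        rows.apply(ParDo.of(new PassThroughFn())).setCoder(RowCoder.of(SCHEMA));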

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 30, 2021 12:46:10 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 30, 2021 12:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 30, 2021 12:46:13 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-d2T2Ww3PkGIpGazPxOGPmNWYNJy49hfJckZ3R5mrrWo.jar
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test573253334719175251.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-lVVrtBnXYSgG1a2yKE-olbo9sFY3aO8mu-UE6wX2Qas.jar
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 30, 2021 12:46:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash 2d97c0ed6cfde58fd8348c7ced6693d03bd8fa5d1dde615f54f569d976923356> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-LZfA7Wz95Y_YNIx87WaT0DvY-l0d3mFfVPVp2XaSM1Y.pb
    Sep 30, 2021 12:46:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 30, 2021 12:46:16 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_17_46_17-1488063317250134391?project=apache-beam-testing
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_17_46_17-1488063317250134391
    Sep 30, 2021 12:46:17 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_17_46_17-1488063317250134391
    Sep 30, 2021 12:46:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-30T00:46:22.320Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:27.775Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.466Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.515Z: Expanding GroupByKey operations into optimizable parts.
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.560Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.637Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.666Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:28.712Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:29.230Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:29.305Z: Starting 5 workers in us-central1-a...
    Sep 30, 2021 12:46:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:46:42.628Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 30, 2021 12:47:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:20.081Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 30, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:45.242Z: Workers have started successfully.
    Sep 30, 2021 12:47:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:47:45.269Z: Workers have started successfully.
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.697Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.897Z: Cleaning up.
    Sep 30, 2021 12:48:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:48:16.985Z: Stopping worker pool...
    Sep 30, 2021 12:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:50:36.616Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 30, 2021 12:50:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-30T00:50:36.663Z: Worker pool stopped.
    Sep 30, 2021 12:50:41 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_17_46_17-1488063317250134391 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c877a94b-b4d5-42f1-aa89-e2bff151cd7b and timestamp: 2021-09-30T00:50:41.950000000Z:
                     Metric:                    Value:
                   read_time                     9.085
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 30, 2021 12:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 42.08 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 24s
152 actionable tasks: 102 executed, 50 from cache

Publishing build scan...
https://gradle.com/s/3fjnuyutmrfsg

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2485

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2485/display/redirect?page=changes>

Changes:

[samuelw] [BEAM-12942] Validate pubsub messages before they are published in

[clairem] [BEAM-12628] make useReflectApi default to true

[noreply] [BEAM-12856] Change hard-coded limits for reading from a UnboundedReader


------------------------------------------
[...truncated 359.71 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 572a460eb19ebb7d6397f0db1a677cfc
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 11'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 11'
Successfully started process 'Gradle Test Executor 11'

Gradle Test Executor 11 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 6:58:14 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 6:58:16 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 6:58:17 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 6:58:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:58:46 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:58:54 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:58:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:58:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:58:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:58:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@769449421]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1710747748]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:59:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:59:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 6:59:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 6:59:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-JPP4cyCHQxn010diAGAefPjsl8-1s3Nta1xm7CweWOs.jar
    Sep 29, 2021 6:59:47 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2195400849734178326.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-E1qygH0ujm0NqN53mzUn7Qw1X8k6RhcpwSlfU35zayc.jar
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 6:59:48 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104889 bytes, hash a7f8094b559bc23ce8d5449d7564caa2e829b036fd58bdecd261207045f02f70> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-p_gJS1Wbwjzo1USddWTKougpsDb9WL3s0mEgcEXwL3A.pb
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 6:59:51 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_11_59_51-1748228159735481124?project=apache-beam-testing
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_11_59_51-1748228159735481124
    Sep 29, 2021 6:59:52 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_11_59_51-1748228159735481124
    Sep 29, 2021 6:59:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T18:59:55.000Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 7:01:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:28.347Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:31.107Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:31.933Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 7:01:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.070Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.134Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.275Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.333Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:32.400Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:33.863Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 7:01:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:01:34.065Z: Starting 5 workers in us-central1-c...
    Sep 29, 2021 7:02:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:08.491Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 7:02:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:08.561Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 7:02:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:18.897Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 7:02:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:58.535Z: Workers have started successfully.
    Sep 29, 2021 7:02:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:02:58.591Z: Workers have started successfully.
    Sep 29, 2021 7:03:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:30.747Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 7:03:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:31.457Z: Cleaning up.
    Sep 29, 2021 7:03:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:03:32.659Z: Stopping worker pool...
    Sep 29, 2021 7:05:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:05:46.159Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 7:05:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T19:05:46.340Z: Worker pool stopped.
    Sep 29, 2021 7:06:45 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_11_59_51-1748228159735481124 finished with status DONE.


Gradle Test Executor 11 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 61fc3acd-7f0a-4563-a089-8fbdfdf37816 and timestamp: 2021-09-29T19:06:45.080000000Z:
                     Metric:                    Value:
                   read_time                     7.666
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 7:06:45 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

3 tests completed, 2 failed
Finished generating test XML results (0.011 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 8 mins 47.008 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 21m 23s
152 actionable tasks: 109 executed, 43 from cache

Publishing build scan...
https://gradle.com/s/7d7qoff3k77au

Stopped 4 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2484

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2484/display/redirect>

Changes:


------------------------------------------
[...truncated 337.20 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 12:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 12:44:59 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
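
Both failing tests (readUsingDirectReadMethod and readUsingDefaultMethod) hit the same root cause: the PCollection<Row> produced for the BeamIOSourceRel has no schema attached, so no Coder can be inferred when the pipeline is finalized. As the exception message itself suggests, attaching a row schema with PCollection.setRowSchema lets coder inference succeed. A minimal, self-contained sketch of that pattern, assuming an illustrative schema named after the projected columns (this is not the IT's actual setup):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema; field names taken from the SELECT list above.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        Row row = Row.withSchema(schema)
            .addValues("alice", "story", "Hello HN", 3L)
            .build();

        // Without setRowSchema, finishSpecifying() fails with the
        // IllegalStateException seen in this log, because no Coder can be
        // inferred for a raw Beam Row.
        PCollection<Row> rows = p.apply(Create.of(row)).setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }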

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
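
For context on what this test exercises: the stack traces above show the IT building its pipeline through BeamSqlRelUtils.toPCollection, i.e. the Beam SQL planner path, where a registered BigQuery table (beam.HACKER_NEWS) with method DIRECT_READ lets the planner fold the LogicalProject and LogicalFilter into the BeamPushDownIOSourceRel shown in the BEAMPlan, so the projection and filter are served by the BigQuery Storage API read itself. The same query shape can be sketched with the public SqlTransform API over an in-memory PCollection of Rows; note that this variant runs the filter as a BeamCalcRel and does not exercise BigQuery push-down, and it is an illustrative sketch, not the IT's source:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryShapeExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Illustrative schema; 'by' is the Hacker News author column
        // referenced in the query above.
        Schema schema = Schema.builder()
            .addNullableField("by", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> inputRows = p
            .apply(Create.of(
                Row.withSchema(schema)
                    .addValues("alice", "story", "Hello HN", 3L).build(),
                Row.withSchema(schema)
                    .addValues("bob", "comment", "Re: Hello HN", 1L).build()))
            .setRowSchema(schema);

        // Same query shape as the IT; over PCOLLECTION the WHERE clause is
        // evaluated in the pipeline rather than pushed into a storage read.
        PCollection<Row> result = inputRows.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
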
    Sep 29, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test657470287928516511.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-HWEZBcNCPETt_MjilxZjiWsuusvE0FuEl60l020V96U.jar
    Sep 29, 2021 12:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 29, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104892 bytes, hash 7bd744d3e6cebdec48e398023fe98c13ec19c56050b1522239d987dbffb1a4b9> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-e9dE0-bOvexI45gCP-mME-wZxWBQsVIiOdmH2_-xpLk.pb
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-29_05_45_12-4445359840033513456?project=apache-beam-testing
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-29_05_45_12-4445359840033513456
    Sep 29, 2021 12:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-29_05_45_12-4445359840033513456
    Sep 29, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T12:45:19.145Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:26.872Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.767Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.806Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.833Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.907Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.927Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:27.961Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:28.299Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:45:29 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:28.377Z: Starting 5 workers in us-central1-a...
    Sep 29, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:45:33.133Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 12:46:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:15.299Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:41.320Z: Workers have started successfully.
    Sep 29, 2021 12:46:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:46:41.353Z: Workers have started successfully.
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.496Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.620Z: Cleaning up.
    Sep 29, 2021 12:47:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:47:10.706Z: Stopping worker pool...
    Sep 29, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:50:05.668Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T12:50:05.733Z: Worker pool stopped.
    Sep 29, 2021 12:50:16 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-29_05_45_12-4445359840033513456 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a473a4e6-b787-4486-bdbc-c44588f3e576 and timestamp: 2021-09-29T12:50:16.855000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.501

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:50:17 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.189 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 22.696 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 58s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/nclbmw3pccass

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2483

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2483/display/redirect>

Changes:


------------------------------------------
[...truncated 345.98 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 6:45:15 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 6:45:15 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:18 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:19 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:45:20 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 29, 2021 6:45:21 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 6:45:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4767608592662490288.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SPSuFqghUI4aME98kWQbAu6dIX6hLa538WUDFVieR24.jar
    Sep 29, 2021 6:45:25 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 29, 2021 6:45:26 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104890 bytes, hash f06cde5785855ef35d79f1cb8dd910372587f22dd2ff2260569ab8751217abbc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8GzeV4WFXvNdefHLjdkQNyWH8i3S_yJgVpq4dRIXq7w.pb
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 6:45:28 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_23_45_28-6780201276708781993?project=apache-beam-testing
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_23_45_28-6780201276708781993
    Sep 29, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_23_45_28-6780201276708781993
    Sep 29, 2021 6:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T06:45:31.977Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:40.972Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.663Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.703Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.730Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.807Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.838Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:41.867Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:42.247Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:42.316Z: Starting 5 workers in us-central1-c...
    Sep 29, 2021 6:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:45:50.742Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:17.907Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 6:46:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:17.945Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:28.179Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:55.358Z: Workers have started successfully.
    Sep 29, 2021 6:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:46:55.407Z: Workers have started successfully.
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.008Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.193Z: Cleaning up.
    Sep 29, 2021 6:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:47:26.288Z: Stopping worker pool...
    Sep 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:49:48.018Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T06:49:48.079Z: Worker pool stopped.
    Sep 29, 2021 6:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_23_45_28-6780201276708781993 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 65a611b5-7c37-478c-b9ac-7cb85651a9ac and timestamp: 2021-09-29T06:49:54.463000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.573

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 6:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 43.423 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/y4mbdkxl577uk

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2482

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2482/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11129] Add namespace and key to portable display data (#15564)

[noreply] [BEAM-12906] Add a `dataframe` extra for installing a pandas version


------------------------------------------
[...truncated 366.26 KB...]
Gradle Test Executor 7 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 8fac3e85e47cb465aca8eadd097ae6c8
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 7'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 7'
Successfully started process 'Gradle Test Executor 7'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 29, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 29, 2021 12:49:36 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 29, 2021 12:49:37 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 29, 2021 12:49:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:40 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:41 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
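
Both failures above name the same remedy: the RowMonitor ParDo emits Beam Row elements with no schema attached, so no Coder can be inferred. A minimal sketch of the fix the error text suggests, with illustrative class and field names and assumed field types (this is not the actual IT code):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFixSketch {
      // Attaching a schema lets Beam infer a RowCoder, which is what the
      // IllegalStateException asks for via PCollection.setRowSchema.
      static PCollection<Row> withSchema(PCollection<Row> monitored) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING) // types assumed
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();
        return monitored.setRowSchema(schema);
      }
    }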

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 29, 2021 12:49:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])
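
Unlike the BeamCalcRel plans in the two failing tests, the BEAMPlan above collapses the projection and filter into the source itself: only the four used fields are read, and the predicate travels to BigQuery. At the BigQueryIO level this is roughly equivalent to the following sketch (the table reference is illustrative, not the IT's real table):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownEquivalentSketch {
      static TypedRead<TableRow> pushedDownRead() {
        return BigQueryIO.readTableRows()
            .from("some-project:some_dataset.HACKER_NEWS") // illustrative table
            .withMethod(TypedRead.Method.DIRECT_READ)
            // usedFields from the BeamPushDownIOSourceRel above
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            // the filter logged below as "Pushing down the following filter"
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
      }
    }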

    Sep 29, 2021 12:49:44 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 29, 2021 12:49:44 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-mkwBNRU4-IrDjMZ0CMdMym0ZilYRrhqnvMLb7Axcm90.jar
    Sep 29, 2021 12:49:48 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7083351408459195147.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-aJjNwohAk2chZZrWcDhxFA3fmi0rcx5ZcQijQ58Wfsw.jar
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 29, 2021 12:49:49 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104891 bytes, hash c4c25d34dc802fe4e8a921f61e04eea39bf8537c0aa431f9465055ae43f49e96> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-xMJdNNyAL-ToqSH2HgTuo5v4U3wKpDH5RlBVrkP0npY.pb
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 29, 2021 12:49:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_17_49_52-11142279470977090787?project=apache-beam-testing
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_17_49_52-11142279470977090787
    Sep 29, 2021 12:49:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_17_49_52-11142279470977090787
    Sep 29, 2021 12:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-29T00:49:55.374Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 29, 2021 12:50:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.028Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.916Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:01.966Z: Expanding GroupByKey operations into optimizable parts.
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.016Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.115Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.165Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.202Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.641Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:50:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:02.764Z: Starting 5 workers in us-central1-a...
    Sep 29, 2021 12:50:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:18.178Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 29, 2021 12:50:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:39.435Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:50:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:39.469Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 29, 2021 12:50:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:50:49.832Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 29, 2021 12:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:14.831Z: Workers have started successfully.
    Sep 29, 2021 12:51:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:14.871Z: Workers have started successfully.
    Sep 29, 2021 12:51:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.695Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 29, 2021 12:51:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.827Z: Cleaning up.
    Sep 29, 2021 12:51:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:51:43.894Z: Stopping worker pool...
    Sep 29, 2021 12:54:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:54:10.501Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 29, 2021 12:54:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-29T00:54:10.549Z: Worker pool stopped.
    Sep 29, 2021 12:54:16 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_17_49_52-11142279470977090787 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 927253f6-e483-4a48-876a-0686275b0456 and timestamp: 2021-09-29T00:54:16.280000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.284

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 29, 2021 12:54:16 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
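
The warning above is why the metrics printed under STANDARD_OUT never reach InfluxDB: InfluxDBPublisher found no measurement/database configured for this run. Assuming the option names used by Beam's test utilities (an assumption; verify against the job's pipeline-options class), publishing would be enabled by extending the -DbeamTestPipelineOptions list along these lines, with all three names and values here being illustrative:

    "--influxDatabase=beam_test_metrics",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"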

Gradle Test Executor 7 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.042 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.052 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 46.545 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org
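
For local reproduction with the suggested diagnostics, the failing task path from this log combines with Gradle's standard flags roughly as follows (the task also needs the -DbeamTestPipelineOptions shown at the top of the run):

    ./gradlew :sdks:java:extensions:sql:perf-tests:integrationTest \
        --tests org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT \
        --stacktrace --info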

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 9m 52s
152 actionable tasks: 115 executed, 37 from cache

Publishing build scan...
https://gradle.com/s/gsmiwhly6qg4o

Stopped 6 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2481

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2481/display/redirect?page=changes>

Changes:

[chamikaramj] Sets the correct coder in 'ConstantTableDestinations' when using BQ

[Luke Cwik] [BEAM-12974] Migrate off of deprecated package for MockitoJUnitRunner

[noreply] [BEAM-12593] Verify DataFrame API on pandas 1.3 (with container update)

[noreply] [BEAM-12973] Print Go Test and Script info. (#15604)

[noreply] [BEAM-10913] - Installing and persisting Grafana plugin in kubernetes


------------------------------------------
[...truncated 341.27 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 957a10a4d509128460557542c6f8bab1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 6:52:32 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
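
The deprecated flag is the empty "--workerHarnessContainerImage=" entry in the -DbeamTestPipelineOptions list above; per the warning itself, the current spelling is:

    --sdkContainerImage=        (replaces --workerHarnessContainerImage=, same empty value)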
    Sep 28, 2021 6:52:33 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 6:52:33 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 6:52:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:36 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:37 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:38 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 6:52:39 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 28, 2021 6:52:40 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 6:52:43 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test489072803952744924.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-uQraVB0IQDQ2YOEU5iWVuptBhQeg6Zsso9-RMi6dsZM.jar
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 6:52:44 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 228eacc7913c31784768f960c3e984e786c6ca5e5acdfaa1f0ea31ce84cc06eb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Io6sx5E8MXhHaPlgw-mE54bGyl5azfqh8OoxzoTMBus.pb
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 6:52:46 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_11_52_46-17894663093463120524?project=apache-beam-testing
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_11_52_46-17894663093463120524
    Sep 28, 2021 6:52:48 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_11_52_46-17894663093463120524
    Sep 28, 2021 6:52:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T18:52:50.593Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:58.642Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.411Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.547Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.595Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 6:52:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.752Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.832Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:52:59.892Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:00.634Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:53:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:00.777Z: Starting 5 workers in us-central1-c...
    Sep 28, 2021 6:53:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:14.245Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 6:53:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:33.812Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:53:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:33.843Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 28, 2021 6:53:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:53:44.087Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:54:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:10.136Z: Workers have started successfully.
    Sep 28, 2021 6:54:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:10.195Z: Workers have started successfully.
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:49.884Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:50.184Z: Cleaning up.
    Sep 28, 2021 6:54:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:54:50.437Z: Stopping worker pool...
    Sep 28, 2021 6:57:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:57:09.461Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 6:57:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T18:57:09.527Z: Worker pool stopped.
    Sep 28, 2021 6:57:15 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_11_52_46-17894663093463120524 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2161815d-114f-4e1c-8fa6-f57c56f413fd and timestamp: 2021-09-28T18:57:15.777000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                    17.656

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:57:16 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 47.341 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 36s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/lrmzmpon2amzc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2480

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2480/display/redirect>

Changes:


------------------------------------------
[...truncated 339.67 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 28, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6972548575975067231.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6qcaWwe3GKP3EJBBpRszHeC07YU6vdlHz0z8wMhayPc.jar
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 12:45:07 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash e037fa70c507fa1cc6c773c71bec1353a5da935c79caefa00b24b7e6d8155b55> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-4Df6cMUH-hzGx3PHG-wTU6Xak1x5yu-gCyS35tgVW1U.pb
    Sep 28, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-28_05_45_10-332958398704530790?project=apache-beam-testing
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-28_05_45_10-332958398704530790
    Sep 28, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-28_05_45_10-332958398704530790
    Sep 28, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T12:45:13.593Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:22.446Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.184Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.212Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.248Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.336Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.359Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.383Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.681Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:23.752Z: Starting 5 workers in us-central1-c...
    Sep 28, 2021 12:45:56 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:56.297Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:58.106Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:45:58.151Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 28, 2021 12:46:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:08.368Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:31.538Z: Workers have started successfully.
    Sep 28, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:46:31.574Z: Workers have started successfully.
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.136Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.300Z: Cleaning up.
    Sep 28, 2021 12:47:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:47:12.394Z: Stopping worker pool...
    Sep 28, 2021 12:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:49:31.463Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 12:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T12:49:31.522Z: Worker pool stopped.
    Sep 28, 2021 12:49:40 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-28_05_45_10-332958398704530790 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 07445f19-266b-4e41-bda7-1f1d29bf5cc3 and timestamp: 2021-09-28T12:49:41.001000000Z:
                     Metric:                    Value:
                   read_time                    17.122
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:49:41 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
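
This warning explains why no metrics reach InfluxDB even when the job itself succeeds: the publisher checks for its measurement/database settings and silently skips publishing when they are absent. A sketch of supplying them, assuming the InfluxDBSettings builder from Beam's test utilities (the builder methods and all values here are assumptions, not taken from this log):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxConfig {
      // With database and measurement present, the publishWithCheck guard
      // above would proceed instead of logging "Metrics won't be published".
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")           // assumed endpoint
            .withDatabase("beam_performance")            // assumed; mirrors --metricsBigQueryDataset
            .withMeasurement("sql_bqio_read_java_batch") // assumed; mirrors --metricsBigQueryTable
            .get();
      }
    }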

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 50.158 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 22s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dni5fye7dagum

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2479

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2479/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12611] Capture a greater proportion of logs associated to an


------------------------------------------
[...truncated 336.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 6:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
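
The root-cause list above names the fix: a PCollection of Beam Rows cannot infer a coder on its own, so the producing code must attach a schema (or an explicit RowCoder). A minimal sketch of that repair in the pipeline code, assuming a schema matching the four projected columns of the query above (the class name, the field types, and the input PCollection are illustrative, not taken from the test source):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // Attach a schema so downstream transforms can infer a coder for Row values.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addStringField("author") // names follow the SELECT list above;
                .addStringField("type")   // the concrete field types are assumptions
                .addStringField("title")
                .addInt64Field("score")
                .build();
        // setRowSchema is the remedy the error message suggests; the equivalent
        // manual form is rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(schema)).
        return rows.setRowSchema(schema);
      }
    }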

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
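
This is the push-down path under test: the usedFields projection and the filter both travel into the BigQuery storage read itself. For reference, the same query can be issued through Beam SQL's public entry point, SqlTransform; a minimal sketch, assuming the HACKER_NEWS table has already been registered with a BigQuery table provider (that wiring is elided here):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class PushDownQuery {
      // Runs the same projection + filter; with a BigQuery table registered,
      // the planner can push both down into the storage read, as logged above.
      static PCollection<Row> run(Pipeline pipeline) {
        return pipeline.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score "
                    + "FROM HACKER_NEWS "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }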
    Sep 28, 2021 6:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8826700596754350547.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YGzE0tq0cHi5e33soYf-VTEG1yaLQCfKEhpRJW5tNqs.jar
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 3ecb1a7f956a4f472e94d873bf27191d68a0e2769c96f1a74f3f42797f81c4b0> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Pssaf5VqT0culNhzvycZHWig4naclvGnTz9CeX-BxLA.pb
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_23_45_10-16000690830406534922?project=apache-beam-testing
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_23_45_10-16000690830406534922
    Sep 28, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_23_45_10-16000690830406534922
    Sep 28, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T06:45:14.080Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:19.547Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.204Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.262Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.295Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 6:45:20 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.357Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.385Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.419Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.778Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:45:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:20.859Z: Starting 5 workers in us-central1-a...
    Sep 28, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:45:48.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:06.583Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:33.200Z: Workers have started successfully.
    Sep 28, 2021 6:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:33.233Z: Workers have started successfully.
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.099Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.284Z: Cleaning up.
    Sep 28, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:46:59.356Z: Stopping worker pool...
    Sep 28, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:49:26.083Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T06:49:26.165Z: Worker pool stopped.
    Sep 28, 2021 6:49:31 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_23_45_10-16000690830406534922 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 73e6f619-a8a0-44a6-bc65-affb1c64ee5b and timestamp: 2021-09-28T06:49:31.629000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.836

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 6:49:32 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 39.841 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mwtdigdmrz2bm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2478

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2478/display/redirect?page=changes>

Changes:

[Andrew Pilloud] [BEAM-12691] FieldAccessDescriptor for BeamCalcRel

[aydar.zaynutdinov] Init basic go backend structure

[aydar.zaynutdinov] Add .gitkeep file as an exclusion

[yileiyang] Remove encoding= from the json.loads call.

[noreply] [BEAM-12798] Add configurable combiner packing limit (#15391)

[noreply] [BEAM-12926] Translates Reshuffle with Samza native repartition operator

[noreply] [BEAM-11007] Allow nbconvert 6.x (#15595)

[noreply] [BEAM-12945] Fix crashes when importing the DataFrame API with pandas


------------------------------------------
[...truncated 340.88 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ad194d210d40413101015464770bfca6
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 28, 2021 12:46:08 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 28, 2021 12:46:08 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 28, 2021 12:46:09 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 28, 2021 12:46:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:12 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:13 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 28, 2021 12:46:14 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 28, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 28, 2021 12:46:15 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 28, 2021 12:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-__o_cT2WEL8WknZtwTW8aQevksYt5UgSYWmasm174nQ.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3858642203605244807.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-LwrHOCGbgC2ue4bcblknD22fEXbokBbZBKLgLWHT-m4.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-ZgoREHN3SqOQclIGldjYWlJZYlnicfJ-w-LJBzxpP7U.jar
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 28, 2021 12:46:18 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 56ec63180b31a16f47e66aca7425904f421affcf7986463dd5879a4f338982cc> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-VuxjGAsxoW9H5mrKdCWQT0Ia_895hkY91YeaTzOJgsw.pb
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 28, 2021 12:46:21 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_17_46_21-4927795710480121675?project=apache-beam-testing
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_17_46_21-4927795710480121675
    Sep 28, 2021 12:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_17_46_21-4927795710480121675
    Sep 28, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-28T00:46:24.808Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 28, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:31.991Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.740Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.790Z: Expanding GroupByKey operations into optimizable parts.
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.826Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.900Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.937Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:32.971Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:33.340Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:46:33.417Z: Starting 5 workers in us-central1-a...
    Sep 28, 2021 12:47:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:02.860Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 28, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:25.907Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 28, 2021 12:47:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:50.298Z: Workers have started successfully.
    Sep 28, 2021 12:47:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:47:50.326Z: Workers have started successfully.
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:21.799Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:21.983Z: Cleaning up.
    Sep 28, 2021 12:48:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:48:22.120Z: Stopping worker pool...
    Sep 28, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:50:41.337Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 28, 2021 12:50:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-28T00:50:41.377Z: Worker pool stopped.
    Sep 28, 2021 12:50:48 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_17_46_21-4927795710480121675 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 76715fd9-b7c5-4c3e-86cb-1bbaf407eead and timestamp: 2021-09-28T00:50:48.165000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.473

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 28, 2021 12:50:48 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 44.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 31s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/oexuvqidchaq4

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2477

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2477/display/redirect?page=changes>

Changes:

[noreply] Update container tags used by unreleased SDKs.


------------------------------------------
[...truncated 338.48 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 6:44:58 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
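
For context, the BEAMPlan above is what Beam SQL produces for the test query when no push-down applies: the filter and projection stay in a BeamCalcRel on top of the IO source. A stand-alone equivalent of running such a query through the public SqlTransform API would look roughly like the sketch below (the in-memory schema and values are assumptions for illustration, not the test's code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed schema covering only the fields the query touches.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt32Field("score")
                .build();

        PCollection<Row> hackerNews =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("ann", "story", "hi", 3).build())
                    .withRowSchema(schema));

        // Same shape of query as in the log, run against the implicit
        // PCOLLECTION table that SqlTransform registers for its input.
        PCollection<Row> result =
            hackerNews.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
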


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
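
The root-cause list in the failure above names the usual fix: a ParDo that re-emits Beam Rows drops the schema, so its output PCollection needs PCollection.setRowSchema (or an explicit setCoder) before pipeline construction can finish. A minimal sketch, with a hypothetical pass-through DoFn standing in for the test's RowMonitor:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      // Illustrative pass-through DoFn standing in for RowMonitor.
      static class PassThroughFn extends DoFn<Row, Row> {
        @ProcessElement
        public void processElement(@Element Row row, OutputReceiver<Row> out) {
          out.output(row);
        }
      }

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema =
            Schema.builder().addStringField("author").addInt32Field("score").build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("ann", 3).build())
                    .withRowSchema(schema));

        // Without setRowSchema (or setCoder) here, pipeline construction fails
        // with the "Unable to return a default Coder" error shown above.
        rows.apply("ParDo(RowMonitor)", ParDo.of(new PassThroughFn()))
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

The readUsingDefaultMethod failure below, and the repeats in the later builds in this digest, are the same construction-time error.
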

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
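
The usedFields list and the pushed-down filter above are what the planner hands to the BigQuery Storage API read. At the BigQueryIO level, an equivalent direct read would look roughly like this sketch (the table name is an assumption; the test reads a Hacker News dataset):

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("bigquery-public-data:hacker_news.full") // assumed table
                    .withMethod(BigQueryIO.TypedRead.Method.DIRECT_READ)
                    // Mirrors usedFields=[[by, type, title, score]] ...
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // ... and the pushed-down filter from the log line above.
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
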
    Sep 27, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8936137984933033682.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-KvPdtXhCZeNI55k6veRk58SDRj4ngDPkdSqPZi2Vx4s.jar
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 123b1db6c7af88c71d94372c202a15e7a05d02060ba54f5477693084babb7769> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EjsdtseviMcdlDcsICoV56BdAgYLpU9Ud2kwhLq7d2k.pb
    Sep 27, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_11_45_12-10521807972155577567?project=apache-beam-testing
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_11_45_12-10521807972155577567
    Sep 27, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_11_45_12-10521807972155577567
    Sep 27, 2021 6:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T18:45:15.623Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:24.780Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.535Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.572Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.597Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.671Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.709Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:25.743Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:26.200Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:26.275Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:45:32.104Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:00.948Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:00.984Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 27, 2021 6:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:11.216Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:36.254Z: Workers have started successfully.
    Sep 27, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:46:36.286Z: Workers have started successfully.
    Sep 27, 2021 6:47:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.301Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.445Z: Cleaning up.
    Sep 27, 2021 6:47:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:47:16.590Z: Stopping worker pool...
    Sep 27, 2021 6:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:49:33.041Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 6:49:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T18:49:33.092Z: Worker pool stopped.
    Sep 27, 2021 6:49:42 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_11_45_12-10521807972155577567 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): dda3b7d8-2d15-47c7-9de5-38cd1c09abee and timestamp: 2021-09-27T18:49:42.444000000Z:
                     Metric:                    Value:
                   read_time                    14.444
                 fields_read                 4375276.0
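
These values come from Beam's Metrics API, queried from the finished Dataflow job. An illustrative monitor DoFn (names are assumptions, not the test's actual code) recording comparable metrics:

    import org.apache.beam.sdk.metrics.Counter;
    import org.apache.beam.sdk.metrics.Distribution;
    import org.apache.beam.sdk.metrics.Metrics;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.Row;

    // Pass-through DoFn: counts fields read and records wall-clock timestamps
    // that a result-collection step can later reduce to a read_time value.
    class MonitorFn extends DoFn<Row, Row> {
      private final Counter fieldsRead = Metrics.counter("perf", "fields_read");
      private final Distribution readTime = Metrics.distribution("perf", "read_time");

      @ProcessElement
      public void processElement(@Element Row row, OutputReceiver<Row> out) {
        fieldsRead.inc(row.getFieldCount());
        readTime.update(System.currentTimeMillis());
        out.output(row);
      }
    }
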

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:49:42 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 49.112 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 23s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/7k7nv6g4i2enk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2476

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2476/display/redirect>

Changes:


------------------------------------------
[...truncated 338.34 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 12:44:56 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1698953046670420008.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-MW8u-3tPmTyH5yoyJtew4u-wYz3XrKQaqwvhvgeXgkE.jar
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 2 seconds
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 12:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 89dd6f105e7b3b8391359e6ae9e4933451c16ae68aa39ed85bc6c82e400c2d37> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-id1vEF57O4ORNZ5q6eSTNFHBauaKo57YW8bILkAMLTc.pb
    Sep 27, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-27_05_45_11-16125404393212011150?project=apache-beam-testing
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-27_05_45_11-16125404393212011150
    Sep 27, 2021 12:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-27_05_45_11-16125404393212011150
    Sep 27, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T12:45:14.510Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 12:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:21.432Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.218Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.269Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.303Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.364Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.395Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.423Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.810Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:22.876Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 12:45:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:34.089Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:59.410Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:45:59.455Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 27, 2021 12:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:09.735Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:32.976Z: Workers have started successfully.
    Sep 27, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:46:33.009Z: Workers have started successfully.
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.538Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.697Z: Cleaning up.
    Sep 27, 2021 12:47:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:47:03.777Z: Stopping worker pool...
    Sep 27, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:49:27.387Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T12:49:27.436Z: Worker pool stopped.
    Sep 27, 2021 12:49:34 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-27_05_45_11-16125404393212011150 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 867e641a-db97-4a36-a5de-ea3285f71f93 and timestamp: 2021-09-27T12:49:34.956000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     10.04

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 43.378 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/p7bm6iey2pxim

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2475

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2475/display/redirect>

Changes:


------------------------------------------
[...truncated 339.41 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 6:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:07 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:08 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 6:45:09 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 6:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 6:45:14 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test606067313956805319.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XVMyk8-627vyBINKnF0d9O_TbZjHcRUaLr5SgEpkflE.jar
    Sep 27, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 1 second
    Sep 27, 2021 6:45:16 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 6:45:16 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash a9bb741af712aaca2d27e2bff84804dd533a5921550fec01bb003874fa05bf5d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-qbt0GvcSqsotJ-K_-EgE3VM6WSFVD-wBuwA4dPoFv10.pb
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 6:45:18 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_23_45_19-9658409184207848229?project=apache-beam-testing
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_23_45_19-9658409184207848229
    Sep 27, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_23_45_19-9658409184207848229
    Sep 27, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T06:45:22.389Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:34.172Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.036Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.075Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.102Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.174Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.200Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.233Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.529Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:35.586Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 6:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:45:43.170Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
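
If the custom metrics do matter, stale descriptors can be removed through the Cloud Monitoring v3 API that the warning links to. A sketch using the Cloud Monitoring Java client, assuming google-cloud-monitoring is on the classpath; the project id and descriptor name below are placeholders:

    import com.google.cloud.monitoring.v3.MetricServiceClient;
    import com.google.monitoring.v3.MetricDescriptorName;

    // Delete one unused custom metric descriptor (placeholder names).
    try (MetricServiceClient client = MetricServiceClient.create()) {
      client.deleteMetricDescriptor(
          MetricDescriptorName.of(
              "apache-beam-testing", "custom.googleapis.com/some_old_counter"));
    }
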
    Sep 27, 2021 6:46:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:21.095Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:47.019Z: Workers have started successfully.
    Sep 27, 2021 6:46:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:46:47.055Z: Workers have started successfully.
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.274Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.408Z: Cleaning up.
    Sep 27, 2021 6:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:47:17.484Z: Stopping worker pool...
    Sep 27, 2021 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:49:39.507Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 6:49:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T06:49:39.537Z: Worker pool stopped.
    Sep 27, 2021 6:49:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_23_45_19-9658409184207848229 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 3abfee88-11a2-484c-ba29-cddc2e21f9e2 and timestamp: 2021-09-27T06:49:45.464000000Z:
                     Metric:                    Value:
                   read_time                      8.58
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 6:49:46 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
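
This warning only means the InfluxDB sink was not configured for this run; the BigQuery metrics sink named in the pipeline options is unaffected. Assuming the publisher reads its database and measurement from pipeline options named after the missing properties (an assumption, not confirmed by this log), the fix would be two more entries in the -DbeamTestPipelineOptions array, with values chosen for the deployment, e.g.:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"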

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 54.179 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 28s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/cqexpwyazocci

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2474

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2474/display/redirect>

Changes:


------------------------------------------
[...truncated 337.45 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 27, 2021 12:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 27, 2021 12:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 27, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 27, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
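
The first root cause suggests the alternative, explicit-coder remedy. A sketch, again with a hypothetical schema rather than the IT's real one:

    import org.apache.beam.sdk.coders.RowCoder;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Explicit-coder variant of the fix: RowCoder.of(schema) is the same
    // coder that setRowSchema would install.
    static PCollection<Row> attachExplicitCoder(PCollection<Row> rows, Schema schema) {
      return rows.setCoder(RowCoder.of(schema));
    }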

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 27, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 27, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 27, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 27, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 27, 2021 12:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5895972536111496274.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-61IUw3jGh0-VUqGy5GTi1tycqW1iq3FKeI23tM-4hkY.jar
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 27, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash f0711782d78e76ed34f7fa55c40211b2985f0b8145a086e1f24abe7d8dfe5834> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-8HEXgteOdu009_pVxAIRsphfC4FFoIbh8kq-fY3-WDQ.pb
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 27, 2021 12:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_17_45_10-9883298268261907232?project=apache-beam-testing
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_17_45_10-9883298268261907232
    Sep 27, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_17_45_10-9883298268261907232
    Sep 27, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-27T00:45:14.085Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:20.279Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:20.996Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.058Z: Expanding GroupByKey operations into optimizable parts.
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.088Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.162Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.188Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.215Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.614Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:21.693Z: Starting 5 workers in us-central1-c...
    Sep 27, 2021 12:45:53 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:45:51.156Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 27, 2021 12:46:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:01.729Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 27, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:30.803Z: Workers have started successfully.
    Sep 27, 2021 12:46:33 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:46:30.823Z: Workers have started successfully.
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.290Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.426Z: Cleaning up.
    Sep 27, 2021 12:47:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:47:00.526Z: Stopping worker pool...
    Sep 27, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:49:13.308Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 27, 2021 12:49:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-27T00:49:13.346Z: Worker pool stopped.
    Sep 27, 2021 12:49:19 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_17_45_10-9883298268261907232 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 78774a8c-e473-4256-a112-56fad869f6c0 and timestamp: 2021-09-27T00:49:19.665000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.946

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 27, 2021 12:49:20 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.03 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 28.152 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/yt3emrpzmp5cc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2473

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2473/display/redirect>

Changes:


------------------------------------------
[...truncated 339.99 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 6:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
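
For orientation, the failing frames follow the usual Beam SQL expansion path: the parsed query becomes a BeamRelNode, and BeamSqlRelUtils.toPCollection expands it onto the pipeline, which is where coder inference runs. A sketch of that path under assumed setup (in-memory SQL environment, placeholder query; not the IT's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.impl.BeamSqlEnv;
    import org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils;
    import org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTableProvider;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    Pipeline pipeline = Pipeline.create();
    // Assumed setup: a SQL env backed by the BigQuery table provider.
    BeamSqlEnv sqlEnv = BeamSqlEnv.inMemory(new BigQueryTableProvider());
    // parseQuery yields the BeamRelNode shown in the plans above;
    // toPCollection is the frame where getCoder() throws in this build.
    PCollection<Row> rows =
        BeamSqlRelUtils.toPCollection(pipeline, sqlEnv.parseQuery("SELECT 1"));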

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
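
An equivalent way to run this kind of query in user code is SqlTransform. A sketch, assuming an `input` PCollection<Row> whose schema carries the referenced fields; when SqlTransform is applied to a single schema-aware PCollection, that input is addressable as PCOLLECTION:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // `input` is an assumed PCollection<Row> with by/type/title/score fields.
    PCollection<Row> result =
        input.apply(
            SqlTransform.query(
                "SELECT `by` AS author, type, title, score FROM PCOLLECTION "
                    + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
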
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5132010080650681199.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-cqTi9NX-l9vIjrtS_yl5wS3CrksSOoMGbycT8bqs4l4.jar
    Sep 26, 2021 6:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.thrift/libthrift/0.14.1/85348a0c44c298bbec5ae747e67ae12e60b3aef6/libthrift-0.14.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/libthrift-0.14.1-WzUQ_nLm8HJeKc7269seq6zMxp15_E7VC2gWAKh2Z-w.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat/tomcat-annotations-api/8.5.46/56c67699de192c603afd6f029e80e5ff8d98e7e9/tomcat-annotations-api-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-annotations-api-8.5.46-amtG0OaVhkRRTAyjZYs7B-YSOmgqIO4203lSQnNfq8M.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/8.5.46/5d686394334d143f48251827435ab086a161e75e/tomcat-embed-core-8.5.46.jar to gs://temp-storage-for-perf-tests/loadtests/staging/tomcat-embed-core-8.5.46-vl-FREjS7l1uADb-srT3ExYweaG2uaepdQjlWRetNcI.jar
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 0 seconds
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 5e3dbc930ebc167e27f19d0d5ba381333c2fb8916345d839a2d556ca42f72583> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Xj28kw68Fn4n8Z0NW6OBMzwvuJFjRdg5otVWykL3JYM.pb
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 6:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_11_45_14-11136377410489573887?project=apache-beam-testing
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_11_45_14-11136377410489573887
    Sep 26, 2021 6:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_11_45_14-11136377410489573887
    Sep 26, 2021 6:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T18:45:17.364Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.292Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.874Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.905Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:22.933Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.005Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.030Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.052Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.332Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:23.396Z: Starting 5 workers in us-central1-a...
    Sep 26, 2021 6:45:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:45:31.122Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 6:46:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:07.181Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:33.881Z: Workers have started successfully.
    Sep 26, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:46:33.914Z: Workers have started successfully.
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.208Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.355Z: Cleaning up.
    Sep 26, 2021 6:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:47:03.422Z: Stopping worker pool...
    Sep 26, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:49:22.658Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 6:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T18:49:22.703Z: Worker pool stopped.
    Sep 26, 2021 6:49:28 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_11_45_14-11136377410489573887 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 8eea85c1-1a26-469a-a8ac-1bddc55bf2a5 and timestamp: 2021-09-26T18:49:28.285000000Z:
                     Metric:                    Value:
                   read_time                     8.894
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:49:28 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 33.013 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b44yzy35q7g6o

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2472

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2472/display/redirect>

Changes:


------------------------------------------
[...truncated 337.98 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 12:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 12:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
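
The failure happens while the pipeline is still being constructed, not on Dataflow workers: RowMonitor is a ParDo that emits Beam Row values, no Coder can be inferred for Row, and the message itself names the fix, PCollection.setRowSchema. A minimal self-contained sketch of that remedy follows; the schema and the parsing DoFn are hypothetical stand-ins for illustration, not the IT's actual RowMonitor:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      // Hypothetical schema matching the four columns the test query selects.
      private static final Schema SCHEMA =
          Schema.builder()
              .addStringField("author")
              .addStringField("type")
              .addStringField("title")
              .addInt64Field("score")
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<Row> rows =
            p.apply(Create.of("alice,story,hello,3"))
                .apply(ParDo.of(
                    new DoFn<String, Row>() {
                      @ProcessElement
                      public void process(@Element String line, OutputReceiver<Row> out) {
                        String[] f = line.split(",");
                        out.output(
                            Row.withSchema(SCHEMA)
                                .addValues(f[0], f[1], f[2], Long.parseLong(f[3]))
                                .build());
                      }
                    }))
                // Without this call, coder inference fails at construction time
                // with exactly the IllegalStateException shown above.
                .setRowSchema(SCHEMA);
        p.run().waitUntilFinish();
      }
    }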

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
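
Those last two entries show the push-down working as intended: the planner folded the LogicalProject and LogicalFilter into the BeamPushDownIOSourceRel, so only the four usedFields are read and the whole predicate is evaluated inside the BigQuery Storage Read API instead of in a downstream BeamCalcRel. For comparison, a hand-written read with roughly the same effect might look like the sketch below; the fully qualified table reference is an assumption, since the log identifies the table only as beam.HACKER_NEWS:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;
    import org.apache.beam.sdk.values.PCollection;

    public class PushDownEquivalentSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    // Assumed table reference; the log names it only as beam.HACKER_NEWS.
                    .from("apache-beam-testing:beam.HACKER_NEWS")
                    // Storage Read API, i.e. the DIRECT_READ method from the log.
                    .withMethod(TypedRead.Method.DIRECT_READ)
                    // Projection push-down: only the usedFields are read.
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    // Filter push-down: the predicate the planner marked "supported".
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }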
    Sep 26, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1254604070143763192.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-9-kMvil9kuVjtdOuQKHYscGOFYbYN5eesmOu_omgvz8.jar
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash af62ebf8274598151ff04eec2336c5b03bf73f2e28da7d1c241753ab8fef334b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-r2Lr-CdFmBUf8E7sIzbFsDv3Py4o2n0cJBdTq4_vM0s.pb
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 12:45:13 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-26_05_45_14-9479678245125380563?project=apache-beam-testing
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-26_05_45_14-9479678245125380563
    Sep 26, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-26_05_45_14-9479678245125380563
    Sep 26, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T12:45:17.295Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
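
The warning is about an option interaction rather than a problem with the job: once autoscalingAlgorithm=NONE, the pool is fixed at --numWorkers and --maxNumWorkers is dead weight. Expressed with the standard DataflowPipelineOptions accessors (a sketch, not code from this job):

    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedPoolOptionsSketch {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.as(DataflowPipelineOptions.class);
        // With a fixed-size pool, numWorkers sets the size outright...
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
        options.setNumWorkers(5);
        // ...and maxNumWorkers is ignored, which is what the warning reports.
        options.setMaxNumWorkers(5);
      }
    }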
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:24.383Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.143Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.193Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.211Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.271Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.291Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.322Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.669Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:25.745Z: Starting 5 workers in us-central1-c...
    Sep 26, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:49.475Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:59.787Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:45:59.809Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 26, 2021 12:46:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:10.027Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:36.856Z: Workers have started successfully.
    Sep 26, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:46:36.898Z: Workers have started successfully.
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.020Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.176Z: Cleaning up.
    Sep 26, 2021 12:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:47:06.257Z: Stopping worker pool...
    Sep 26, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:49:26.413Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 12:49:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T12:49:26.477Z: Worker pool stopped.
    Sep 26, 2021 12:49:31 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-26_05_45_14-9479678245125380563 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): a8b4c07a-fbda-4b8b-ac9f-7ec72f2fd5a1 and timestamp: 2021-09-26T12:49:31.921000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.881

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:49:33 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 40.88 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 15s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/misvg7v45ysku

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2471

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2471/display/redirect>

Changes:


------------------------------------------
[...truncated 338.44 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 6:44:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 6:44:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 6:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 26, 2021 6:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1235604829244032045.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-XqowqaS49m2HdUlnmpXrWGy6UWTNhVhLxBOJFdOJcpw.jar
    Sep 26, 2021 6:45:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 6:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash 045abca4714f0fae801463ce12fdfc75f5a037f7c51a41734820e0019a4ec701> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-BFq8pHFPD66AFGPOEv38dfWgN_fFGkFzSCDgAZpOxwE.pb
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_23_45_10-12170458182251827948?project=apache-beam-testing
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_23_45_10-12170458182251827948
    Sep 26, 2021 6:45:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_23_45_10-12170458182251827948
    Sep 26, 2021 6:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T06:45:14.041Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:20.353Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.074Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.113Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.150Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.221Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.248Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.285Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.628Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:21.718Z: Starting 5 workers in us-central1-c...
    Sep 26, 2021 6:45:47 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:47.609Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:55.286Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:45:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:45:55.319Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 26, 2021 6:46:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:05.542Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:30.039Z: Workers have started successfully.
    Sep 26, 2021 6:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:30.079Z: Workers have started successfully.
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.101Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.263Z: Cleaning up.
    Sep 26, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:46:59.367Z: Stopping worker pool...
    Sep 26, 2021 6:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:49:14.151Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 6:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T06:49:14.200Z: Worker pool stopped.
    Sep 26, 2021 6:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_23_45_10-12170458182251827948 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4558b16c-a64e-48ab-88ca-dceffb0d9a04 and timestamp: 2021-09-26T06:49:21.114000000Z:
                     Metric:                    Value:
                   read_time                     9.072
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 6:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 28.379 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 2s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mf35lmvuobpb6

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2470

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2470/display/redirect>

Changes:


------------------------------------------
[...truncated 337.75 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 26, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 26, 2021 12:44:53 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 26, 2021 12:44:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 26, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 26, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 26, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 26, 2021 12:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
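
At the IO level, the pushed-down plan corresponds to a BigQuery Storage API read that fetches only the used fields and evaluates the filter server-side. A rough, hedged sketch of the same read written directly against BigQueryIO (the table name is the public Hacker News dataset, an assumption rather than the table this IT actually reads, and running it needs GCP credentials):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // DIRECT_READ + selected fields + row restriction is what
        // BeamPushDownIOSourceRel hands to the Storage API, instead of
        // filtering in a downstream BeamCalcRel.
        p.apply(BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }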
    Sep 26, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1834057475545864917.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-yWIMTLhmjPJLlhf9R_2BhWwV8Bx-Bjc22jrTxmwJgo8.jar
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 26, 2021 12:45:03 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 844be46f876ef98404bf3f0c87ed56fedf368f27be27326fa318dfc790479a0f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-hEvkb4du-YQEvz8Mh-1W_t82jye-JzJvoxjfx5BHmg8.pb
    Sep 26, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 26, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_17_45_06-6421286648442750099?project=apache-beam-testing
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_17_45_06-6421286648442750099
    Sep 26, 2021 12:45:07 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_17_45_06-6421286648442750099
    Sep 26, 2021 12:45:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-26T00:45:09.573Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
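
This warning is expected given the suite's own configuration: the job pins the worker count instead of autoscaling, so maxNumWorkers has no effect. The relationship between the three pipeline options, roughly:

    --autoscalingAlgorithm=NONE   # disables autoscaling, which triggers this warning
    --numWorkers=5                # the fixed pool size actually used
    --maxNumWorkers=5             # only consulted under THROUGHPUT_BASED autoscaling
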
    Sep 26, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.083Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.818Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.847Z: Expanding GroupByKey operations into optimizable parts.
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.878Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.948Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:16.975Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.008Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.534Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:17.607Z: Starting 5 workers in us-central1-a...
    Sep 26, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:45:26.661Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 26, 2021 12:46:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:03.561Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 26, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:30.032Z: Workers have started successfully.
    Sep 26, 2021 12:46:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:30.088Z: Workers have started successfully.
    Sep 26, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:58.907Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 26, 2021 12:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:59.050Z: Cleaning up.
    Sep 26, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:46:59.152Z: Stopping worker pool...
    Sep 26, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:49:20.479Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 26, 2021 12:49:22 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-26T00:49:20.526Z: Worker pool stopped.
    Sep 26, 2021 12:49:26 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_17_45_06-6421286648442750099 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9d1ce8f9-c69b-49ac-8fd8-008a50fad271 and timestamp: 2021-09-26T00:49:26.375000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.087

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 26, 2021 12:49:26 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
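
The metrics above were computed but never exported: InfluxDBPublisher skips the write unless both a database and a measurement are configured. A sketch of supplying them through the test-utils settings object; the builder method names are an assumption about the beam-sdks-java-test-utils API, and the values are placeholders echoing the job's BigQuery metric options:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    public class InfluxSettingsSketch {
      public static void main(String[] args) {
        // With database and measurement present, publishWithCheck writes
        // the metrics instead of logging the warning above.
        InfluxDBSettings settings = InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_performance")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }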

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 37.561 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 9s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/72olwmfyztj2q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2469

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2469/display/redirect>

Changes:


------------------------------------------
[...truncated 337.75 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 6:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 6:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8238155918827975761.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-TRur8FMsi0thdlBAFaqG7pqS8oYSIwXO8ZKbm25MraY.jar
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 6:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash ed7c385e0be0647a842284b68b592f1461157ec46f401fae8892fc0e12473cda> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7Xw4XgvgZHqEIoS2i1kvFGEVfsRvQB-uiJL8DhJHPNo.pb
    Sep 25, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_11_45_08-4975878268342362171?project=apache-beam-testing
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_11_45_08-4975878268342362171
    Sep 25, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_11_45_08-4975878268342362171
    Sep 25, 2021 6:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T18:45:12.177Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:17.856Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.532Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.602Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.631Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.695Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.718Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:18.742Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:19.039Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:19.120Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 6:45:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:45:40.300Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:08.797Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:33.495Z: Workers have started successfully.
    Sep 25, 2021 6:46:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:46:33.521Z: Workers have started successfully.
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.030Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.176Z: Cleaning up.
    Sep 25, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:47:06.261Z: Stopping worker pool...
    Sep 25, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:49:31.848Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 6:49:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T18:49:31.886Z: Worker pool stopped.
    Sep 25, 2021 6:49:38 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_11_45_08-4975878268342362171 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 15db7ab9-ee8a-4504-a8a5-d08aa9191a95 and timestamp: 2021-09-25T18:49:38.775000000Z:
                     Metric:                    Value:
                   read_time                     8.479
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:49:39 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 48.54 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/enghqxrp46tki

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2468

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2468/display/redirect>

Changes:


------------------------------------------
[...truncated 343.89 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 3'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 3'
Successfully started process 'Gradle Test Executor 3'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 12:45:20 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 12:45:20 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:23 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:25 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:45:26 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 12:45:27 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 12:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 12:45:32 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8001638972657287600.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UvFX61zunvX7BuizZ3b4gxlqm0h6oZuwbc9Zdwvd_wQ.jar
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 1 seconds
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 12:45:33 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 4c7c3cdddebf371f1d2b96d022fcabda988b617a94333afe45fdf50d96192006> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-THw83d6_Nx8dK5bQIvyr2piLYXqUMzr-Rf31DZYZIAY.pb
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 12:45:36 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-25_05_45_36-15522118172883620923?project=apache-beam-testing
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-25_05_45_36-15522118172883620923
    Sep 25, 2021 12:45:37 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-25_05_45_36-15522118172883620923
    Sep 25, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T12:45:39.660Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:44.766Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.482Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.545Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.573Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.637Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.662Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:45.693Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:46.020Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:45:46 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:46.085Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 12:46:00 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:45:58.575Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 12:46:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:31.503Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:56.523Z: Workers have started successfully.
    Sep 25, 2021 12:46:57 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:46:56.542Z: Workers have started successfully.
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.373Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.492Z: Cleaning up.
    Sep 25, 2021 12:47:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:47:24.551Z: Stopping worker pool...
    Sep 25, 2021 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:49:45.782Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 12:49:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T12:49:45.825Z: Worker pool stopped.
    Sep 25, 2021 12:49:50 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-25_05_45_36-15522118172883620923 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4944706f-133e-4e6a-b59a-da64c29a8332 and timestamp: 2021-09-25T12:49:50.829000000Z:
                     Metric:                    Value:
                   read_time                      8.49
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:49:51 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
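
The warning above is expected for this job: InfluxDB publishing is skipped whenever the measurement/database settings are absent. As a rough sketch (the option names below are an assumption based on Beam's test-utils conventions, not taken from this log), publishing would be enabled by adding entries like these to the -DbeamTestPipelineOptions array:

    --influxDatabase=beam_performance              (assumed option name)
    --influxMeasurement=sql_bqio_read_java_batch   (assumed option name)
    --influxHost=http://localhost:8086             (assumed option name; host not shown in this log)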

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.027 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 35.131 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 101 executed, 51 from cache

Publishing build scan...
https://gradle.com/s/4euke2lgttdmw

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2467

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2467/display/redirect?page=changes>

Changes:

[noreply] [BEAM-11982] Java Spanner - Implement IO Request Count metrics (#15493)


------------------------------------------
[...truncated 352.65 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 83e0aceefc3fd4863bf7ab55c5815815
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 4'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 4'
Successfully started process 'Gradle Test Executor 4'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 6:46:25 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 6:46:25 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 6:46:26 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 6:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
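
The three root causes listed in the exception point at the same conventional fix: a ParDo that emits Beam Row elements cannot have its output coder inferred, so the output PCollection needs an explicit schema via PCollection.setRowSchema. A minimal, self-contained sketch of that pattern (the schema, the pass-through DoFn, and the class name are illustrative stand-ins, not the test's actual code):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Illustrative schema; the IT's real schema comes from the HACKER_NEWS table.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        PCollection<Row> rows = p
            .apply(Create.of(
                Row.withSchema(schema).addValues("alice", "story", "hello", 3L).build())
                .withRowSchema(schema))
            // Stand-in for the test's ParDo(RowMonitor): a pass-through DoFn whose
            // Row output coder cannot be inferred automatically.
            .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void processElement(@Element Row row, OutputReceiver<Row> out) {
                out.output(row);
              }
            }))
            // The fix the error message suggests: attach the schema explicitly.
            .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Attaching the schema once, immediately after the ParDo, is enough: downstream transforms pick up the coder from the PCollection.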

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 6:46:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-xiAgUZ3f8CgUmkhSZWRg_5Fz0oYmRVjZGHVW_XRaCdY.jar
    Sep 25, 2021 6:46:37 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2940449598606000992.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-v42ycm5qXd3fLwhQVOLmUpUDz8Xi7obUeUNvBkWeEqU.jar
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 6:46:38 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103999 bytes, hash bfd1edf3246b3d51bbbe1f571961bc8b8ebff57fd77c7c51f34e16250e8a6cac> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-v9Ht8yRrPVG7vh9XGWG8i46_9X_XfHxR804WJQ6KbKw.pb
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 6:46:40 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_23_46_40-77168588993866869?project=apache-beam-testing
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_23_46_40-77168588993866869
    Sep 25, 2021 6:46:41 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_23_46_40-77168588993866869
    Sep 25, 2021 6:46:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T06:46:44.084Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 6:46:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:49.438Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.180Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.215Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.244Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.299Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.322Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.352Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.653Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:46:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:50.718Z: Starting 5 workers in us-central1-a...
    Sep 25, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:46:56.641Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:25.066Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:47:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:25.096Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 25, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:35.472Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:47:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:35.496Z: Resized worker pool to 4, though goal was 5.  This could be a quota issue.
    Sep 25, 2021 6:47:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:47:45.830Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:01.677Z: Workers have started successfully.
    Sep 25, 2021 6:48:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:01.702Z: Workers have started successfully.
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.761Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.903Z: Cleaning up.
    Sep 25, 2021 6:48:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:48:34.975Z: Stopping worker pool...
    Sep 25, 2021 6:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:50:53.573Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 6:50:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T06:50:53.617Z: Worker pool stopped.
    Sep 25, 2021 6:50:58 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_23_46_40-77168588993866869 finished with status DONE.
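
In contrast to the two coder failures, the push-down run above succeeds: the BeamPushDownIOSourceRel plan and the "Pushing down the following filter" line show that both the projection (usedFields=[by, type, title, score]) and the filter are handed to the BigQuery Storage Read API rather than evaluated in the pipeline. A rough equivalent at the BigQueryIO level, with a placeholder table reference (the IT reaches this read through Beam SQL rather than by calling BigQueryIO directly):

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("Read Input BQ Rows with push-down",
            BigQueryIO.readTableRows()
                // Placeholder table reference; the actual HACKER_NEWS table used
                // by the IT is not shown in this log excerpt.
                .from("my-project:my_dataset.HACKER_NEWS")
                .withMethod(Method.DIRECT_READ)
                // Only these four columns are fetched from storage...
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // ...and rows are filtered server-side, matching the pushed-down
                // filter logged by BigQueryTable.buildIOReader.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }

Reading fewer columns and fewer rows is what drives the read_time metric reported below; both options require Method.DIRECT_READ.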

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d7b997e1-445c-410c-972d-ea7683bbeed8 and timestamp: 2021-09-25T06:50:58.986000000Z:
                     Metric:                    Value:
                   read_time                    13.598
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 6:50:59 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 4 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.21 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 38.103 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 37s
152 actionable tasks: 104 executed, 48 from cache

Publishing build scan...
https://gradle.com/s/ott5u3qmewgqc

Stopped 3 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2466

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2466/display/redirect?page=changes>

Changes:

[Robert Bradshaw] [BEAM-12934] Use environment capabilities to determine length prefixing.

[Brian Hulette] Add missing comma

[ryanthompson591] Update tensorflow to version 2.6.0

[zhoufek] [BEAM-9487] Fix incorrected Repeatedly.may_finish implementation

[noreply] Cleanup use of futures. (#15043)

[noreply] Go Lint fix for wordcount and metrics (#15580)

[Brian Hulette] Fix whitespace lint

[noreply] [BEAM-3304] Snippets for trigger in BPG (#15409)

[noreply] [BEAM-12832] Add Go SDK xlang info to programming guide. (#15447)

[noreply] [BEAM-11097] Add SideInputCache to StateReader (#15563)

[Robert Bradshaw] Revert "Avoid apiary submission of job graph when it is not needed.

[noreply] [BEAM-12769] Java-emulating external transform. (#15546)

[noreply] [BEAM-12913] Pass query priority from ReadAllFromBigQuery (#15584)


------------------------------------------
[...truncated 338.28 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 8 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 8'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 8'
Successfully started process 'Gradle Test Executor 8'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 25, 2021 12:44:38 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 25, 2021 12:44:39 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 25, 2021 12:44:40 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 25, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:42 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:43 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:44 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 25, 2021 12:44:45 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 25, 2021 12:44:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 25, 2021 12:44:50 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7309957586490269237.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-_WnyG8sLTgAp6Br9WwdaQ1vDWck5goL-RRfSg374y_Y.jar
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 7 seconds
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 25, 2021 12:44:57 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e856d800349e70babc4d947a539bc6e2d7bd2517520d0092fdd59d3481f2ec48> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-6FbYADSecLq8TZR6U5vG4te9JRdSDQCS_dWdNIHy7Eg.pb
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 25, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_17_45_00-410929181615742579?project=apache-beam-testing
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_17_45_00-410929181615742579
    Sep 25, 2021 12:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_17_45_00-410929181615742579
    Sep 25, 2021 12:45:04 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-25T00:45:03.579Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:34.207Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:35.577Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 25, 2021 12:45:36 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.306Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.355Z: Expanding GroupByKey operations into optimizable parts.
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.390Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.488Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.514Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.550Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:36.936Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:45:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:45:37.010Z: Starting 5 workers in us-central1-c...
    Sep 25, 2021 12:46:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:19.593Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 25, 2021 12:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:45.910Z: Workers have started successfully.
    Sep 25, 2021 12:46:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:46:45.945Z: Workers have started successfully.
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.633Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.793Z: Cleaning up.
    Sep 25, 2021 12:47:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:47:18.856Z: Stopping worker pool...
    Sep 25, 2021 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:49:29.735Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 25, 2021 12:49:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-25T00:49:29.777Z: Worker pool stopped.
    Sep 25, 2021 12:49:37 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_17_45_00-410929181615742579 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): cbe5e528-9c14-41a3-8dc1-48566506957a and timestamp: 2021-09-25T00:49:37.265000000Z:
                     Metric:                    Value:
                   read_time                     9.996
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 25, 2021 12:49:37 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 8 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.001 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.002 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 2.928 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 20s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/d24zdbrmadpau

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2465

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2465/display/redirect>

Changes:


------------------------------------------
[...truncated 338.62 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 6:44:42 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 6:44:42 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 6:44:43 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:45 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:47 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
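
Both coder failures above have the same root cause, spelled out in the exception text: the RowMonitor ParDo emits Beam Row elements, and a PCollection<Row> has no inferable Coder until a schema is attached. A minimal, self-contained sketch of the fix the message itself suggests follows; the schema fields mirror the query's four output columns, while the class name, input data, and DoFn are illustrative assumptions rather than the IT's actual code.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Schema matching the four projected columns in the SQL above.
        Schema schema = Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

        PCollection<Row> rows = pipeline
            .apply(Create.of("alice,story,Some title,3"))
            .apply(ParDo.of(new DoFn<String, Row>() {
              @ProcessElement
              public void processElement(@Element String line, OutputReceiver<Row> out) {
                String[] f = line.split(",");
                out.output(Row.withSchema(schema)
                    .addValues(f[0], f[1], f[2], Integer.parseInt(f[3]))
                    .build());
              }
            }))
            // Without this call, PCollection.finishSpecifying() throws the
            // IllegalStateException logged above; with it, Beam infers a RowCoder.
            .setRowSchema(schema);

        pipeline.run().waitUntilFinish();
      }
    }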

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:44:48 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:44:49 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 6:44:49 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
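
For comparison, note that the BEAMPlan above contains no BeamCalcRel: the projection (usedFields=[by, type, title, score]) and the filter are handed to the BigQuery Storage Read API instead of being evaluated worker-side. A rough hand-written equivalent of that pushed-down read with plain BigQueryIO is sketched below; the fully qualified table reference is an assumption (the log only shows [beam, HACKER_NEWS]), and the selected-fields/row-restriction calls are the point of the example.

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        PCollection<TableRow> rows = pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // assumed reference
                .withMethod(Method.DIRECT_READ)
                // Equivalent of usedFields in the plan: read only these columns.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Equivalent of the filter pushed down in the log line above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        pipeline.run().waitUntilFinish();
      }
    }
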
    Sep 24, 2021 6:44:49 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1284230225333265737.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-1o2h1Ljn-MnvLwScjFCA2jm-rVn_-0DVX_ewaRdj_-w.jar
    Sep 24, 2021 6:44:52 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 24, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 6:44:53 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 7eb4f8696f3cb768b6dfb6962d89952410950cb4206f3c7ed25e1d11dd0417f2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-frT4aW88t2i237aWLYmVJBCVDLQgbzx-0l4dEd0EF_I.pb
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 6:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_11_44_55-2459813221828693126?project=apache-beam-testing
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_11_44_55-2459813221828693126
    Sep 24, 2021 6:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_11_44_55-2459813221828693126
    Sep 24, 2021 6:44:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T18:44:58.962Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:06.108Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:06.973Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.017Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.060Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.129Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.165Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.196Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.613Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:07.711Z: Starting 5 workers in us-central1-c...
    Sep 24, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:18.286Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:42.476Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:45:43 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:42.517Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 24, 2021 6:45:54 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:45:52.722Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:20.972Z: Workers have started successfully.
    Sep 24, 2021 6:46:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:21.013Z: Workers have started successfully.
    Sep 24, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.603Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:46:55 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.788Z: Cleaning up.
    Sep 24, 2021 6:46:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:46:55.867Z: Stopping worker pool...
    Sep 24, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:49:16.481Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 6:49:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T18:49:16.544Z: Worker pool stopped.
    Sep 24, 2021 6:49:27 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_11_44_55-2459813221828693126 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 28209f62-5c8d-4897-a31f-500057983f68 and timestamp: 2021-09-24T18:49:27.331000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.461
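
For scale, those two figures imply a sustained rate of roughly 462,000 fields per second (4,375,276 fields / 9.461 s of measured read time) for the pushed-down read.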

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:49:27 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
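
The pipeline itself succeeded; only the InfluxDB publication was skipped because no measurement/database was configured. Assuming the standard Beam test-utils options (an assumption; this log does not confirm the flag names or values), publishing could be enabled by adding something like the following to the -DbeamTestPipelineOptions array shown earlier:

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"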

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.008 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 2,5,main]) completed. Took 4 mins 49.522 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 8s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/jomwvy3bl3cyi

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2464

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2464/display/redirect>

Changes:


------------------------------------------
[...truncated 337.24 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8714511773009991827.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-6wFErkUUNZ4s4RVGNRiPabbe0XpmMP2R2HDyrdydQ2g.jar
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 4a59f11164f54caf0e7ddadfad8ed5a00395cc37863c94d8f4d06654dc0001fb> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-SlnxEWT1TK8OfdrfrY7VoAOVzDeGPJTY9NBmVNwAAfs.pb
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-24_05_45_07-6918030974945565860?project=apache-beam-testing
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-24_05_45_07-6918030974945565860
    Sep 24, 2021 12:45:10 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-24_05_45_07-6918030974945565860
    Sep 24, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T12:45:13.604Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.112Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.924Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.959Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:18.990Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.049Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.078Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.110Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.447Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:19.521Z: Starting 5 workers in us-central1-a...
    Sep 24, 2021 12:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:45:45.706Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 12:46:04 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:04.128Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:30.352Z: Workers have started successfully.
    Sep 24, 2021 12:46:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:46:30.389Z: Workers have started successfully.
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.401Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.558Z: Cleaning up.
    Sep 24, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:47:00.625Z: Stopping worker pool...
    Sep 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:49:22.960Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 12:49:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T12:49:23.011Z: Worker pool stopped.
    Sep 24, 2021 12:49:29 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-24_05_45_07-6918030974945565860 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 84b2f5f2-d035-4207-ac57-446d4eb65e0b and timestamp: 2021-09-24T12:49:29.063000000Z:
                     Metric:                    Value:
                   read_time                     9.428
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:49:29 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 39.572 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 12s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/dbm4efp37xfj2

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2463

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2463/display/redirect>

Changes:


------------------------------------------
[...truncated 341.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 6:45:22 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 6:45:23 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 6:45:28 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 6:45:29 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 6:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 6:45:33 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2515785665617398108.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-u1pRGU47V9-INDq0nZ2fj7xYX9wjxftvRxS29b0WMc8.jar
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 6:45:34 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash bd7906fb3c8587d81c9d0a6badaf4b6f7f301a59a2735bb43cb034bb7cf1baf3> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-vXkG-zyFh9gcnQprra9Lb38wGlmic1u0PLA0u3zxuvM.pb
    Sep 24, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 6:45:37 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 6:45:38 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_23_45_38-4999594876451740195?project=apache-beam-testing
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_23_45_38-4999594876451740195
    Sep 24, 2021 6:45:39 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_23_45_38-4999594876451740195
    Sep 24, 2021 6:45:43 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T06:45:41.792Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:50.479Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.390Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.434Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.472Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.588Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.624Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 6:45:51 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:51.663Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:52.082Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:52.187Z: Starting 5 workers in us-central1-c...
    Sep 24, 2021 6:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:45:56.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 6:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:34.168Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:59.714Z: Workers have started successfully.
    Sep 24, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:46:59.740Z: Workers have started successfully.
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.615Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.823Z: Cleaning up.
    Sep 24, 2021 6:47:31 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:47:30.918Z: Stopping worker pool...
    Sep 24, 2021 6:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:49:53.558Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 6:49:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T06:49:53.611Z: Worker pool stopped.
    Sep 24, 2021 6:49:59 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_23_45_38-4999594876451740195 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9c9b8c78-6057-483d-bc43-3e94b030d40a and timestamp: 2021-09-24T06:49:59.724000000Z:
                     Metric:                    Value:
                   read_time                     9.254
                 fields_read                 4375276.0
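
For comparison, the push-down logged above (usedFields plus the supported filter) corresponds to the projection and row-restriction options of BigQueryIO's Storage API read. A hedged sketch of an equivalent hand-written read; the table reference is an assumption, not taken from this log:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

    BigQueryIO.TypedRead<TableRow> read =
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // assumed table
            .withMethod(Method.DIRECT_READ)                 // Storage Read API
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");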

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 6:50:00 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
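
The metrics are computed but dropped because the InfluxDB settings were built without a measurement and database. A hedged sketch of what supplying them looks like with Beam's testutils publisher (host and database values are assumptions; the measurement name simply mirrors the --metricsBigQueryTable option above):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")            // assumed host
            .withDatabase("beam_test_metrics")            // assumed database
            .withMeasurement("sql_bqio_read_java_batch")  // mirrors the BigQuery table option
            .get();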

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 42.017 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 40s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/wbq5hydbz3fqi

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2462

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2462/display/redirect?page=changes>

Changes:

[Brian Hulette] Add back 'more' break

[chuck.yang] Run BigQuery queries with batch priority

[chuck.yang] Allow changing query priority in ReadFromBigQuery

[chuck.yang] Update CHANGES

[alexander.chermenin] [BEAM-10822] Fixed typo in BigqueryClient

[zyichi] [BEAM-12898] Minor fix to Kubernetes.groovy postBuildScript.

[noreply] [BEAM-12869] Bump tensorflow from 2.5.0 to 2.5.1 in

[noreply] [BEAM-12946] Fix SmallestPerKey, add unit testing (#15567)

[zyichi] [BEAM-12908] Sickbay pubsublite.ReadWriteIT

[noreply] Relocate Go SDK breaking change note out of template into 2.33.0.


------------------------------------------
[...truncated 340.76 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker Thread 31,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 31,5,main]) started.
Gradle Test Executor 57 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is c75df4ad0b4326aabbad29be2039d846
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 57'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 57'
Successfully started process 'Gradle Test Executor 57'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 24, 2021 12:45:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 24, 2021 12:45:19 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 24, 2021 12:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 24, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:22 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:23 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified; you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1299075293]. Correct one of the following root causes:
      No Coder has been manually specified; you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
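
The same missing-coder failure recurs here. Besides setRowSchema, the first remedy the message lists (manually specifying a coder) amounts to attaching an explicit RowCoder; as a hedged fragment, assuming 'output' is the ParDo(RowMonitor) result (a PCollection<Row>) and 'schema' is the source table's Schema:

    import org.apache.beam.sdk.coders.RowCoder;

    // Sets an explicit coder; setRowSchema(schema) would additionally make the
    // schema visible to downstream schema-aware transforms.
    output.setCoder(RowCoder.of(schema));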

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 24, 2021 12:45:24 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 24, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 24, 2021 12:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 24, 2021 12:45:25 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 24, 2021 12:45:28 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3067383912730159324.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-RDIeCsZwHBmQy94ZoMBrlbsiVS69QscI-QVNIZ6ntMA.jar
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 file newly uploaded in 0 seconds
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 24, 2021 12:45:29 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 18c0661a19c3a6b5bfe0fa3d91a384419be310a1070c2e0344467c82697f0056> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-GMBmGhnDprW_4Po9kaOEQZvjEKEHDC4DREZ8gml_AFY.pb
    Sep 24, 2021 12:45:31 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 24, 2021 12:45:32 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_17_45_32-14184398797648112020?project=apache-beam-testing
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_17_45_32-14184398797648112020
    Sep 24, 2021 12:45:33 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_17_45_32-14184398797648112020
    Sep 24, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-24T00:45:38.208Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:43.366Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.025Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.075Z: Expanding GroupByKey operations into optimizable parts.
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.115Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.187Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.214Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.247Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.588Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:45:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:45:44.641Z: Starting 5 workers in us-central1-a...
    Sep 24, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:05.709Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 24, 2021 12:46:30 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:29.209Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:55.420Z: Workers have started successfully.
    Sep 24, 2021 12:46:56 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:46:55.456Z: Workers have started successfully.
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.505Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.647Z: Cleaning up.
    Sep 24, 2021 12:47:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:47:24.721Z: Stopping worker pool...
    Sep 24, 2021 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:49:43.602Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 24, 2021 12:49:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-24T00:49:43.645Z: Worker pool stopped.
    Sep 24, 2021 12:49:50 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_17_45_32-14184398797648112020 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c9e709a3-21e4-4373-be6a-7d4e5ad078bd and timestamp: 2021-09-24T00:49:50.754000000Z:
                     Metric:                    Value:
                   read_time                       9.0
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 24, 2021 12:49:51 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 57 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.006 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.004 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker Thread 31,5,main]) completed. Took 4 mins 36.6 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 100 executed, 52 from cache

Publishing build scan...
https://gradle.com/s/qcqjg3bvildw2

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2461

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2461/display/redirect?page=changes>

Changes:

[noreply] Avoid setting empty builders in proto setters

[Robert Bradshaw] Fix some website logos.

[kawaigin] Implicitly watch and track anonymous pipeline and PCollections

[chamikaramj] Reject requests when parameter names cannot be validated unless

[chamikaramj] Updates error message


------------------------------------------
[...truncated 352.53 KB...]
Gradle Test Executor 3 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 6:47:08 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 6:47:09 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 6:47:10 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 6:47:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1741599875]. Correct one of the following root causes:
      No Coder has been manually specified; you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1261942871]. Correct one of the following root causes:
      No Coder has been manually specified; you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 6:47:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 6:47:18 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 6:47:23 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-iWELRk0J0eqj9_ooQn6KLmU0Q20eazTA8el4stWEfXY.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2936684944477571990.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-CIXDPNEZW4V5N8CqNLzcgGSVUVSm7_dZ4ZyZ2qG_UU0.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/core-construction-java/build/libs/beam-runners-core-construction-java-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-core-construction-java-2.34.0-SNAPSHOT-vZyLxaaCl52ich-R-UQheFzpjD42oKouyyEoIx03QC4.jar
    Sep 23, 2021 6:47:24 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/direct-java/build/libs/beam-runners-direct-java-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-runners-direct-java-2.34.0-SNAPSHOT-rHNNJt8fKjoZ7FjgFCzASOqHR3-JCxYQBb_7vqA7ZtE.jar
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 246 files cached, 4 files newly uploaded in 2 seconds
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 6:47:26 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 027ad43ceaa0e50b63f22db5cc5fa88a58b4726aebbc49592b8a841efc2d83ed> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-AnrUPOqg5Qtj8i21zF-oili0cmrrvElZK4qEHvwtg-0.pb
    Sep 23, 2021 6:47:28 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 6:47:29 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_11_47_29-5175217799069690722?project=apache-beam-testing
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_11_47_29-5175217799069690722
    Sep 23, 2021 6:47:30 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_11_47_29-5175217799069690722
    Sep 23, 2021 6:47:34 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T18:47:32.940Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:39.547Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.285Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.326Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.376Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.458Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.488Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.513Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.855Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:47:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:40.933Z: Starting 5 workers in us-central1-a...
    Sep 23, 2021 6:47:59 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:47:57.740Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:31.167Z: Autoscaling: Raised the number of workers to 4 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:48:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:31.192Z: Resized worker pool to 4, though goal was 5. This could be a quota issue.
    Sep 23, 2021 6:48:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:48:41.526Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:03.176Z: Workers have started successfully.
    Sep 23, 2021 6:49:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:03.228Z: Workers have started successfully.
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.385Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.526Z: Cleaning up.
    Sep 23, 2021 6:49:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:49:31.621Z: Stopping worker pool...
    Sep 23, 2021 6:51:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:51:46.747Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 6:51:47 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T18:51:46.792Z: Worker pool stopped.
    Sep 23, 2021 6:51:51 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_11_47_29-5175217799069690722 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 4d760c17-9dbb-4873-8634-904f58e0e733 and timestamp: 2021-09-23T18:51:52.056000000Z:
                     Metric:                    Value:
                   read_time                     8.396
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:51:53 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 3 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.048 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.053 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 52.801 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 7m 19s
152 actionable tasks: 103 executed, 49 from cache

Publishing build scan...
https://gradle.com/s/2edlqbjwfog4y

Stopped 2 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2460

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2460/display/redirect>

Changes:


------------------------------------------
[...truncated 341.47 KB...]

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 12:45:02 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 12:45:03 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1505334907]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
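
    The exception lists its own remedies: a PCollection of Beam Rows cannot have a Coder inferred, so the producing transform must attach a schema (or an explicit Coder). A minimal sketch of the setRowSchema route, assuming field names and types taken from the query's projection above; `rows` is a stand-in for the upstream PCollection<Row>, not the IT's actual RowMonitor wiring (the real fix for this failure belongs in the SQL transform that produces this PCollection):

        import org.apache.beam.sdk.schemas.Schema;
        import org.apache.beam.sdk.values.PCollection;
        import org.apache.beam.sdk.values.Row;

        // Schema matching the projected columns of the query above.
        Schema schema =
            Schema.builder()
                .addNullableField("author", Schema.FieldType.STRING)
                .addNullableField("type", Schema.FieldType.STRING)
                .addNullableField("title", Schema.FieldType.STRING)
                .addNullableField("score", Schema.FieldType.INT64)
                .build();

        // rows: an existing PCollection<Row> produced upstream (hypothetical here).
        PCollection<Row> withSchema = rows.setRowSchema(schema);
        // Equivalent, with an explicit Coder instead of a schema:
        // rows.setCoder(org.apache.beam.sdk.coders.RowCoder.of(schema));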

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@40782033]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 12:45:09 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
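
    The pushed-down projection and filter map one-to-one onto the BigQuery Storage Read API's selected fields and row restriction. A rough hand-written equivalent using plain BigQueryIO, assuming a hypothetical table id (the IT resolves its actual beam.HACKER_NEWS table through its own table provider configuration):

        import java.util.Arrays;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
        import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;

        // Read only the four projected columns, filtered server-side.
        BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full")  // hypothetical table id
            .withMethod(Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2");
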
    Sep 23, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test3945536414481175957.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-upxuCbr27peujoVKE7chutrtsZa5e-BThvm_dK1TBRU.jar
    Sep 23, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-tests-I-osQrG-lSEES6RD4ibuovIeVl-_U3nOEEsBLo1rYrA.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/thrift/build/libs/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-thrift-2.34.0-SNAPSHOT-4bF8QpaxkWlkdPssX5OV0S9rIzFuRU0pojrN2fBwjjk.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/join-library/build/libs/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-join-library-2.34.0-SNAPSHOT-xz-aSlhizJi8jigGo3amZ2g_MkglE3dBulicsi2UX5U.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/udf/build/libs/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-udf-2.34.0-SNAPSHOT-MlQQAS1Je1cH4lXtIOKM3zL6o97S5F7sJQ3zUailw08.jar
    Sep 23, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 1 seconds
    Sep 23, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104000 bytes, hash af9781628e59b27097793bc43892cf6bf3bc88186592c229304c113d5f985145> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-r5eBYo5ZsnCXeTvEOJLPa_O8iBhlksIpMEwRPV-YUUU.pb
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 12:45:17 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-23_05_45_17-9181264945269481986?project=apache-beam-testing
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-23_05_45_17-9181264945269481986
    Sep 23, 2021 12:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-23_05_45_17-9181264945269481986
    Sep 23, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T12:45:24.072Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 12:45:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.032Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.680Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.723Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.753Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.830Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.866Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:37.897Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:38.253Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:45:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:45:38.342Z: Starting 5 workers in us-central1-c...
    Sep 23, 2021 12:46:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:03.313Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:12.594Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:12.624Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 23, 2021 12:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:22.906Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:48.198Z: Workers have started successfully.
    Sep 23, 2021 12:46:49 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:46:48.235Z: Workers have started successfully.
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.094Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.247Z: Cleaning up.
    Sep 23, 2021 12:47:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:47:19.335Z: Stopping worker pool...
    Sep 23, 2021 12:49:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:49:50.498Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 12:49:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T12:49:50.548Z: Worker pool stopped.
    Sep 23, 2021 12:49:58 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-23_05_45_17-9181264945269481986 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 6df0a03d-f87a-439f-a8e9-2a9cc96fc940 and timestamp: 2021-09-23T12:49:58.438000000Z:
                     Metric:                    Value:
                   read_time                     8.838
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:49:58 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.067 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 5 mins 1.666 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 34s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/b6ar7voroa7cm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2459

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2459/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12383] Adding Go SDK and Kafka IO to Gradle cross-language test


------------------------------------------
[...truncated 340.97 KB...]
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is ef521024ff2c41cc9caf5e1ede2e99a5
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 6:45:19 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 6:45:20 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 6:45:21 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:24 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:25 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:26 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 6:45:27 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 6:45:27 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 6:45:30 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5910955340003637270.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-g4Rx_5rhi-UXfSpycV8e91lWcnG12qQ71zBea2Ze0mI.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.mongodb/mongo-java-driver/3.12.7/1f45c6a397feeb46da75425619333d1cc6f90f78/mongo-java-driver-3.12.7.jar to gs://temp-storage-for-perf-tests/loadtests/staging/mongo-java-driver-3.12.7-D_zgBJWNb9mzmuetJ37a0X9XtpcfSGsXYpxe6eE8Tao.jar
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 6:45:31 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b669c72af7daf9ece45c5f55b2c8232ff9275730054fa0808ad3cd9b2e0c2d90> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-tmnHKvfa-ezkXF9VssgjL_knVzAFT6CAitPNmy4MLZA.pb
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 6:45:34 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_23_45_34-17970570427316720099?project=apache-beam-testing
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_23_45_34-17970570427316720099
    Sep 23, 2021 6:45:35 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_23_45_34-17970570427316720099
    Sep 23, 2021 6:45:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T06:45:38.771Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:45.510Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.202Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.231Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.261Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 6:45:46 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.326Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.357Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.390Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.751Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:45:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:46.819Z: Starting 5 workers in us-central1-a...
    Sep 23, 2021 6:45:55 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:45:53.710Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 6:46:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:32.942Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:58.443Z: Workers have started successfully.
    Sep 23, 2021 6:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:46:58.480Z: Workers have started successfully.
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.001Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.125Z: Cleaning up.
    Sep 23, 2021 6:47:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:47:27.190Z: Stopping worker pool...
    Sep 23, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:49:46.637Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 6:49:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T06:49:46.707Z: Worker pool stopped.
    Sep 23, 2021 6:49:52 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_23_45_34-17970570427316720099 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 37e52fc6-fcec-47eb-95ae-a6d4e0659e53 and timestamp: 2021-09-23T06:49:52.471000000Z:
                     Metric:                    Value:
                   read_time                     9.162
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 6:49:52 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 37.547 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 33s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/ewao26yabe3tm

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2458

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2458/display/redirect?page=changes>

Changes:

[kileysok] Add MapState and SetState support

[Kyle Weaver] [BEAM-12898] Use new postbuildscript dsl in Flink load tests

[Kyle Weaver] run postbuild regardless of test result

[Kyle Weaver] spotless

[noreply] Merge pull request #15487 from [BEAM-12812] - Run Github Actions on GCP

[noreply] [BEAM-11097] Add SideInputCache to harness control type (#15530)

[kawaigin] Updated interactive integration test golden screenshots.

[noreply] Revert PR 15487 (BEAM-12812) (#15554)

[zyichi] [BEAM-12898] Trying out solution suggestion for JENKINS-66189 to solve


------------------------------------------
[...truncated 340.82 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 52ebef38e18ea6b3c79e97ca10aec7b1
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 23, 2021 12:46:56 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 23, 2021 12:46:57 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 23, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

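Both failures above are the same schema-inference problem: the RowMonitor ParDo emits Beam Rows, and a PCollection<Row> has no default coder unless a schema is attached. Below is a minimal, self-contained sketch of the fix the error message itself points at (PCollection.setRowSchema); the field names and types are assumptions for illustration, not the IT's actual schema.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Assumed schema, loosely mirroring the query's output columns.
        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> rows = p.apply(
            Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                .withRowSchema(schema));

        // For a PCollection<Row> whose producing transform cannot infer the
        // schema itself (the situation in the stack traces above), attach it
        // explicitly so a SchemaCoder can be derived:
        rows.setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }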
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 23, 2021 12:47:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 23, 2021 12:47:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 23, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5184320067722493232.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-az2qEX0NxQwDhCTXI2k7ebhcmbbYnVpTPUkWAcHylm4.jar
    Sep 23, 2021 12:47:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.google.cloud/google-cloud-bigtable-emulator/0.137.1/b1639aa134de1302e43d9e9c3843f6ff853c510f/google-cloud-bigtable-emulator-0.137.1.jar to gs://temp-storage-for-perf-tests/loadtests/staging/google-cloud-bigtable-emulator-0.137.1-V2SlLU4BjIGf5QoF7YL-A-QNjw49NdXrNM4QoX9MXSI.jar
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 2 seconds
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 23, 2021 12:47:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103999 bytes, hash 11b94f5423a572a9f59f1157c15775ecabb636195aa71bd2dc2e72adbf45c43b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-EblPVCOlcqn1nxFXwVd17Ku2NhlapxvS3C5yrb9FxDs.pb
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 23, 2021 12:47:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_17_47_11-3244588918368151576?project=apache-beam-testing
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_17_47_11-3244588918368151576
    Sep 23, 2021 12:47:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_17_47_11-3244588918368151576
    Sep 23, 2021 12:47:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-23T00:47:15.522Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:23.824Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.603Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.642Z: Expanding GroupByKey operations into optimizable parts.
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.672Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.767Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.806Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:24.845Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:25.323Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:47:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:25.411Z: Starting 5 workers in us-central1-c...
    Sep 23, 2021 12:47:49 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:47:48.955Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 23, 2021 12:48:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:01.559Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:48:02 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:01.614Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 23, 2021 12:48:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:11.900Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 23, 2021 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:38.117Z: Workers have started successfully.
    Sep 23, 2021 12:48:38 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:48:38.151Z: Workers have started successfully.
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.689Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.869Z: Cleaning up.
    Sep 23, 2021 12:49:11 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:49:09.963Z: Stopping worker pool...
    Sep 23, 2021 12:51:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:51:37.848Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 23, 2021 12:51:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-23T00:51:37.925Z: Worker pool stopped.
    Sep 23, 2021 12:51:45 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_17_47_11-3244588918368151576 finished with status DONE.

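The push-down run, by contrast, succeeds: its BEAMPlan collapses to a single BeamPushDownIOSourceRel, so the projection (by, type, title, score) and the filter are evaluated by the BigQuery Storage Read API instead of in a BeamCalcRel. A rough sketch of the equivalent hand-written DIRECT_READ follows, using a hypothetical table name (the IT's actual source table is not shown in this log).

    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class DirectReadPushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply(
            BigQueryIO.readTableRows()
                .from("bigquery-public-data:hacker_news.full") // assumed table
                .withMethod(Method.DIRECT_READ)
                // Server-side column projection, mirroring usedFields above.
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                // Server-side filter, mirroring the pushed-down predicate above.
                .withRowRestriction("(type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }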
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0adb44ea-5b14-4349-8bae-16d579796dd5 and timestamp: 2021-09-23T00:51:45.159000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.401

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 23, 2021 12:51:45 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

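The closing warning is unrelated to the failures: the metrics were computed but skipped for export because no InfluxDB measurement/database was configured. Assuming Beam's testutils read these from the pipeline options (an assumption based on the publisher's package, not confirmed by this log), publishing would be enabled by adding entries like the following to -DbeamTestPipelineOptions:

    "--influxMeasurement=sql_bqio_read_java_batch",  // assumed option name
    "--influxDatabase=beam_test_metrics",            // assumed option name and database
    "--influxHost=http://localhost:8086"             // assumed option name; host is illustrative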
Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 53.002 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 50s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/gidlazahzrocc

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2457

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2457/display/redirect>

Changes:


------------------------------------------
[...truncated 338.67 KB...]
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 6:44:57 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 6:44:58 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 6:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:03 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:04 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 6:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 6:45:08 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1244217088328882518.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-gaHEQFRtP_RhA-CnXy22XXTvOIgKB8QuwHJmhQY5QcM.jar
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 6:45:09 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash b914cb50a0d09d4ec672e60bd0efed8060794a88f55dcb567218c7faff2d154b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-uRTLUKDQnU7GcuYL0O_tgGB5Soj1XctWchjH-v8tFUs.pb
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_11_45_12-13103536790849586117?project=apache-beam-testing
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_11_45_12-13103536790849586117
    Sep 22, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_11_45_12-13103536790849586117
    Sep 22, 2021 6:45:16 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T18:45:15.394Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.116Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.700Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.744Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.782Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.874Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.914Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:23.948Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:24.342Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:24.427Z: Starting 5 workers in us-central1-c...
    Sep 22, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:36.599Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:59.901Z: Autoscaling: Raised the number of workers to 3 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:02 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:45:59.954Z: Resized worker pool to 3, though goal was 5.  This could be a quota issue.
    Sep 22, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:10.241Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:35.293Z: Workers have started successfully.
    Sep 22, 2021 6:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:46:35.318Z: Workers have started successfully.
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.293Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.461Z: Cleaning up.
    Sep 22, 2021 6:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:47:07.542Z: Stopping worker pool...
    Sep 22, 2021 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:49:30.694Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 6:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T18:49:30.743Z: Worker pool stopped.
    Sep 22, 2021 6:49:37 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_11_45_12-13103536790849586117 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2a3475f2-b7be-478c-ab0d-6c180f4e11af and timestamp: 2021-09-22T18:49:37.336000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.199

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:49:37 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.348 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 45.159 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 18s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/a4t4wz5trhy74

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2456

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2456/display/redirect?page=changes>

Changes:

[danthev] Fix 2.32.0 release notes.


------------------------------------------
[...truncated 337.21 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 12:44:55 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1080326950153223984.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-E0JAGKSgqH47A6_9SBeNMJe8OWoJ0gbHg_d6TMB2vVw.jar
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 73213c3a9e936855b3bb3a48c7e5adb4a07f9b322108c820f91b481ec04af87f> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-cyE8Op6TaFWzuzpIx-WttKB_mzIhCMgg-RtIHsBK-H8.pb
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-22_05_45_09-15371512024525983311?project=apache-beam-testing
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-22_05_45_09-15371512024525983311
    Sep 22, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-22_05_45_09-15371512024525983311
    Sep 22, 2021 12:45:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T12:45:12.484Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:20.382Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.185Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.234Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.269Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.352Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.391Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.422Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.880Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:45:22 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:21.957Z: Starting 5 workers in us-central1-c...
    Sep 22, 2021 12:45:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:45:50.153Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 12:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:11.003Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:36.335Z: Workers have started successfully.
    Sep 22, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:46:36.366Z: Workers have started successfully.
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.601Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.759Z: Cleaning up.
    Sep 22, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:47:07.848Z: Stopping worker pool...
    Sep 22, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:49:29.844Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 12:49:30 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T12:49:29.906Z: Worker pool stopped.
    Sep 22, 2021 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-22_05_45_09-15371512024525983311 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 36c0a812-054a-401b-bf6a-0b82689eea44 and timestamp: 2021-09-22T12:49:35.225000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.325

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
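
This warning means the run's metrics (read_time and fields_read above) were
computed but not exported to InfluxDB, because no measurement/database was
configured. Assuming the suite reads the usual Beam load-test options (the
option names below are an assumption, not confirmed by this log), publishing
could be enabled by extending beamTestPipelineOptions with entries such as:

    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxDatabase=beam_test_metrics"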

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.036 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 45.649 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/uaklsmfn5dgek

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2455

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2455/display/redirect?page=changes>

Changes:

[noreply] Merge pull request #15537 from [BEAM-12908] Add a sleep to the IT after


------------------------------------------
[...truncated 337.68 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 184d1d1e4a85b12c1e9374eee179eec0
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 6:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 6:44:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 6:44:55 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
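
Note that of the three tests, only readUsingDirectReadMethodPushDown completes;
readUsingDirectReadMethod and readUsingDefaultMethod both fail as above because
the ParDo(RowMonitor) output is a PCollection<Row> with no schema attached, so
no RowCoder can be inferred. The fix the exception message itself suggests
looks roughly like the sketch below (a minimal illustration with hypothetical
field names, not the IT's actual code): re-attach the schema to the ParDo
output with setRowSchema.

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();

        PCollection<Row> rows =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("alice", 5L).build())
                    .withRowSchema(schema));

        // A DoFn that outputs Row drops the schema; this mirrors the failing
        // ParDo(RowMonitor) output. Re-attaching the schema lets the SDK infer
        // a RowCoder for the PCollection.
        PCollection<Row> monitored =
            rows.apply(
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void processElement(@Element Row row, OutputReceiver<Row> out) {
                            out.output(row);
                          }
                        }))
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }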

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 6:45:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 6:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 6:45:04 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test171101041235627558.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-NU1LvmrDW5YH4cBRSQX_zfD2xwsH9tyX1bR9_ahgIkI.jar
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 6:45:05 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e6eb149ebfe9b6e0adf5512916244aa35a3f7f59620b42cbcc6dd0a5874c9e02> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-5usUnr_ptuCt9VEpFiRKo1o_f1liC0LLzG3QpYdMngI.pb
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 6:45:07 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_23_45_08-831295087074972985?project=apache-beam-testing
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_23_45_08-831295087074972985
    Sep 22, 2021 6:45:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_23_45_08-831295087074972985
    Sep 22, 2021 6:45:13 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T06:45:11.641Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:17.337Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.129Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.166Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.191Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.257Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.294Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 6:45:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.323Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.730Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:45:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:18.808Z: Starting 5 workers in us-central1-a...
    Sep 22, 2021 6:45:50 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:45:49.475Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 6:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:03.070Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:29.367Z: Workers have started successfully.
    Sep 22, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:29.427Z: Workers have started successfully.
    Sep 22, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.337Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.483Z: Cleaning up.
    Sep 22, 2021 6:46:59 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:46:58.566Z: Stopping worker pool...
    Sep 22, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:49:23.097Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 6:49:25 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T06:49:23.169Z: Worker pool stopped.
    Sep 22, 2021 6:49:30 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_23_45_08-831295087074972985 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 7ee577c3-a708-4b2f-8b17-aca66c653bcc and timestamp: 2021-09-22T06:49:30.654000000Z:
                     Metric:                    Value:
                   read_time                     8.108
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 6:49:31 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.034 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 10,5,main]) completed. Took 4 mins 41.272 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 13s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/iwf5unyl6uklk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2454

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2454/display/redirect?page=changes>

Changes:

[kawaigin] [BEAM-10708] Added an example notebook for beam_sql magic

[noreply] Add a timeout for BQ streaming_insert RPCS (#15541)


------------------------------------------
[...truncated 337.32 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is af5aef652cb0ffd2e4d4d1a71b16ad28
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 22, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 22, 2021 12:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 22, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 22, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 22, 2021 12:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 22, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 22, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 22, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4274549280785273227.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-d9SpcQgcM0lltFLrjsbMZhYEmwb_zY2Ng-lF5yftMSc.jar
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 22, 2021 12:45:09 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 688b941c99458e9229a06f828b9731d51d518db03490136639a5a3539100c897> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-aIuUHJlFjpIpoG-Ci5cx1R1RjbA0kBNmOaWjU5EAyJc.pb
    Sep 22, 2021 12:45:11 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_17_45_12-18409918559903482452?project=apache-beam-testing
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_17_45_12-18409918559903482452
    Sep 22, 2021 12:45:12 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_17_45_12-18409918559903482452
    Sep 22, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-22T00:45:15.552Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 22, 2021 12:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:21.291Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.034Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.067Z: Expanding GroupByKey operations into optimizable parts.
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.126Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.201Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.230Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.260Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.650Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:45:23 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:22.722Z: Starting 5 workers in us-central1-a...
    Sep 22, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:45:32.085Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 22, 2021 12:46:01 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:01.335Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:34.996Z: Workers have started successfully.
    Sep 22, 2021 12:46:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:46:35.020Z: Workers have started successfully.
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.530Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.660Z: Cleaning up.
    Sep 22, 2021 12:47:06 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:47:04.742Z: Stopping worker pool...
    Sep 22, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:49:28.088Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 22, 2021 12:49:28 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-22T00:49:28.131Z: Worker pool stopped.
    Sep 22, 2021 12:49:35 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_17_45_12-18409918559903482452 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): daeb5ee0-018d-43be-97f3-60ac719aa1a7 and timestamp: 2021-09-22T00:49:35.590000000Z:
                     Metric:                    Value:
                   read_time                     8.915
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 22, 2021 12:49:36 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.033 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 9,5,main]) completed. Took 4 mins 41.029 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4grmsoriyf5so

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2453

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2453/display/redirect?page=changes>

Changes:

[noreply] [BEAM-12919] Removed IBM Streams from runner matrix (#15542)

[noreply] [BEAM-12258] Re-throw exception from forked thread in


------------------------------------------
[...truncated 340.65 KB...]
> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is af5aef652cb0ffd2e4d4d1a71b16ad28
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 6:45:08 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:11 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
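    For reference, the query the planner logs here can be run through Beam SQL's public API with SqlTransform. A minimal sketch, assuming hackerNews is a schema-aware PCollection<Row>; the class and method names are illustrative, not taken from the test source:

    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    class HackerNewsQuery {
      // 'hackerNews' stands in for a schema-aware PCollection<Row>; it is
      // registered under the table name the SQL text references.
      static PCollection<Row> run(PCollection<Row> hackerNews) {
        return PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
            .apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));
      }
    }

    SqlTransform hands the query to the same CalciteQueryPlanner whose SQLPlan> and BEAMPlan> output appears below.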
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:12 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:13 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

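    The IllegalStateException above lists its own remedies: set a Coder explicitly with .setCoder(), or attach a schema so a Row coder can be inferred. A minimal sketch of the second option, assuming the failing output is a PCollection<Row> whose fields match the query's projection; the field names and types here are illustrative, not taken from the test source:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowSchemaFix {
      // 'rows' stands in for the Row-typed output that failed coder inference.
      static PCollection<Row> attachSchema(PCollection<Row> rows) {
        Schema schema =
            Schema.builder()
                .addNullableField("author", FieldType.STRING)
                .addNullableField("type", FieldType.STRING)
                .addNullableField("title", FieldType.STRING)
                .addNullableField("score", FieldType.INT64)
                .build();
        // With a schema attached, Beam can infer a RowCoder for this
        // PCollection, which is exactly what the failure above asks for.
        return rows.setRowSchema(schema);
      }
    }

    Calling .setCoder(RowCoder.of(schema)) would have the same effect; setRowSchema is the schema-aware variant the error message recommends.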
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:45:14 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
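    The BeamPushDownIOSourceRel above shows the projection (usedFields) and the filter being pushed into the BigQuery Storage API read. A minimal sketch of the equivalent hand-written BigQueryIO read; the table name is an assumption, while the selected fields and row restriction are copied from the log:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead;

    class PushDownEquivalent {
      static TypedRead<TableRow> read() {
        return BigQueryIO.readTableRows()
            .from("bigquery-public-data:hacker_news.full") // assumed table
            .withMethod(TypedRead.Method.DIRECT_READ)
            .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
            .withRowRestriction(
                "(type = 'story' OR type = 'job') AND score > 2");
      }
    }

    Because the restriction and projection are applied inside the Storage API, only matching rows and the four selected columns cross the wire, which is why this push-down variant succeeds and reads efficiently while the Calc-based plans above fail before submission.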
    Sep 21, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/build/libs/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-extensions-sql-2.34.0-SNAPSHOT-tests-5kU7Hyrl28H7hvdoTVd39uKp33WIlhc8ZD64bUU4wzA.jar
    Sep 21, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8522678397766649911.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-nzjuqmCa_lTUZ040_QVeqJtii2ZKtrI20iTcL37msQo.jar
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 6:45:19 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash abdf42e26da4175cb906cbce369ef9635e6de55d2821827503a4577c62fe3847> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-q99C4m2kF1y5BsvONp75Y15t5V0oIYJ1A6RXfGL-OEc.pb
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 6:45:21 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_11_45_22-705998211072244175?project=apache-beam-testing
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_11_45_22-705998211072244175
    Sep 21, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_11_45_22-705998211072244175
    Sep 21, 2021 6:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T18:45:25.500Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:31.316Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:31.975Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.015Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.045Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.126Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.158Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.183Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.517Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:45:33 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:45:32.590Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:00.782Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:46:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:00.808Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 21, 2021 6:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:06.367Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:11.035Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:36.825Z: Workers have started successfully.
    Sep 21, 2021 6:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:46:36.860Z: Workers have started successfully.
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.617Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.790Z: Cleaning up.
    Sep 21, 2021 6:47:06 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:47:04.862Z: Stopping worker pool...
    Sep 21, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:49:41.520Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T18:49:41.562Z: Worker pool stopped.
    Sep 21, 2021 6:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_11_45_22-705998211072244175 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d2e5245a-460d-4f4e-94b8-a110f52be334 and timestamp: 2021-09-21T18:49:47.959000000Z:
                     Metric:                    Value:
                   read_time                     7.208
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:49:48 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 44.309 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 97 executed, 55 from cache

Publishing build scan...
https://gradle.com/s/22wry22hcx2ji

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2452

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2452/display/redirect>

Changes:


------------------------------------------
[...truncated 336.56 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e50055417125b31f26e1108fef7d1958
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 12:44:55 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 12:44:56 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:01 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 12:45:02 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 12:45:05 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test9199631328302929270.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qtJQ1nU_LVwc-bMkv5pcCvxZv0ZAzYRLUG99Adr53Bo.jar
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 12:45:06 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash e34f7296faccfaf7745513c9c8f6c117f387ad5a10c8133a3c858ea05cb84898> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-409ylvrM-vd0VRPJyPbBF_OHrVoQyBM6PIWOoFy4SJg.pb
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-21_05_45_09-4200941418815551979?project=apache-beam-testing
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-21_05_45_09-4200941418815551979
    Sep 21, 2021 12:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-21_05_45_09-4200941418815551979
    Sep 21, 2021 12:45:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T12:45:12.560Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:18.858Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.664Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.698Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.734Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.830Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.869Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:19.903Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 12:45:20 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:20.279Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:45:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:20.361Z: Starting 5 workers in us-central1-c...
    Sep 21, 2021 12:45:35 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:45:33.390Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:07.964Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:35.443Z: Workers have started successfully.
    Sep 21, 2021 12:46:37 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:46:35.474Z: Workers have started successfully.
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.566Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.787Z: Cleaning up.
    Sep 21, 2021 12:47:07 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:47:06.917Z: Stopping worker pool...
    Sep 21, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:50:05.289Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 12:50:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T12:50:05.330Z: Worker pool stopped.
    Sep 21, 2021 12:50:11 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-21_05_45_09-4200941418815551979 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 93904afc-5048-4ad6-8983-056c3298e888 and timestamp: 2021-09-21T12:50:11.911000000Z:
                     Metric:                    Value:
                   read_time                     9.809
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:50:12 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 5 mins 21.387 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 53s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/mt2hzwi53hric

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2451

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2451/display/redirect?page=changes>

Changes:

[chamikaramj] Python support for directly using Java transforms using constructor and

[chamikaramj] Fixes yapf

[chamikaramj] Fixes lint

[chamikaramj] Addressing reviewer comments

[chamikaramj] Adds support for a field name format that will be ignored at expansion

[chamikaramj] Addresses reviewer comments

[chamikaramj] Use correct ignore field prefix in Python side


------------------------------------------
[...truncated 339.99 KB...]
> Task :sdks:java:extensions:sql:perf-tests:testClasses
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) started.
Gradle Test Executor 9 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is e50055417125b31f26e1108fef7d1958
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 9'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 9'
Successfully started process 'Gradle Test Executor 9'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 6:45:57 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 6:45:58 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
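
Both non-push-down tests fail the same way: Beam cannot infer a Coder for a PCollection of Row, so the schema has to be attached explicitly, exactly as the second root cause in the message suggests (PCollection.setRowSchema). A minimal sketch of that fix, assuming a hypothetical four-field schema modeled on the SELECT list above rather than the table's real schema:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaFixSketch {
      // Hypothetical four-field schema; the real HACKER_NEWS table has more fields.
      static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT32)
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(Create.of("story"))
                .apply(ParDo.of(new DoFn<String, Row>() {
                  @ProcessElement
                  public void process(@Element String type, OutputReceiver<Row> out) {
                    out.output(
                        Row.withSchema(SCHEMA)
                            .addValues("someone", type, "Example title", 3)
                            .build());
                  }
                }))
                // Without this call (or an explicit .setCoder()), pipeline
                // specification fails with the IllegalStateException above.
                .setRowSchema(SCHEMA);

        p.run().waitUntilFinish();
      }
    }

An equivalent alternative for Create-based inputs is Create.of(row).withRowSchema(schema); either way the PCollection carries a RowCoder before the pipeline is finalized.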

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 6:46:03 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 6:46:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 6:46:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
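
In contrast to the two failures above, the push-down test reaches execution: the planner folds the projection and filter into the IO source (BeamPushDownIOSourceRel), so only the four referenced fields are read and the predicate is evaluated by the BigQuery Storage Read API rather than by a downstream BeamCalcRel. A sketch of declaring and querying such a table with DIRECT_READ enabled, assuming the BigQuery table provider is on the classpath; the DDL, the shortened column list, and the public dataset location are illustrative, since the IT registers its table programmatically:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Shortened to the four referenced columns for brevity; the plans
        // above show the real table exposing 14 fields ($0..$13).
        String ddl =
            "CREATE EXTERNAL TABLE HACKER_NEWS(`by` VARCHAR, type VARCHAR, "
                + "title VARCHAR, score BIGINT) "
                + "TYPE bigquery "
                + "LOCATION 'bigquery-public-data:hacker_news.full' "
                + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";

        // With method=DIRECT_READ the planner can emit a BeamPushDownIOSourceRel,
        // handing the projection and the filter to the BigQuery Storage Read API
        // instead of evaluating them in a BeamCalcRel.
        PCollection<Row> rows =
            p.apply(
                SqlTransform.query(
                        "SELECT `by` AS author, type, title, score FROM HACKER_NEWS "
                            + "WHERE (type = 'story' OR type = 'job') AND score > 2")
                    .withDdlString(ddl));

        p.run().waitUntilFinish();
      }
    }
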
    Sep 21, 2021 6:46:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 6:46:07 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test8512241474882420204.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-N3T6yMXWd4vU9P2BPRbMJm0QgxBhdlJ15cxIuaPmGqQ.jar
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 6:46:08 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash c124355bc851def2d9176381e34ffe6d6e29ff51e60486af60a828cc892ea38d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wSQ1W8hR3vLZF2OB40_-bW4p_1HmBIavYKgozIkuo40.pb
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 6:46:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_23_46_10-10554845187517000315?project=apache-beam-testing
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_23_46_10-10554845187517000315
    Sep 21, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_23_46_10-10554845187517000315
    Sep 21, 2021 6:46:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T06:46:16.062Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:21.626Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.494Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.531Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.558Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.643Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.691Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:22.714Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:23.040Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:46:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:23.102Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 6:46:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:46:42.938Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:06.667Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 6:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:39.878Z: Workers have started successfully.
    Sep 21, 2021 6:47:41 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:47:39.913Z: Workers have started successfully.
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:06.920Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:07.036Z: Cleaning up.
    Sep 21, 2021 6:48:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:48:07.099Z: Stopping worker pool...
    Sep 21, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:50:23.433Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 6:50:24 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T06:50:23.474Z: Worker pool stopped.
    Sep 21, 2021 6:50:28 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_23_46_10-10554845187517000315 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 1e37e075-1024-4a77-ab3e-6166dc87494d and timestamp: 2021-09-21T06:50:28.994000000Z:
                     Metric:                    Value:
                   read_time                     8.339
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 6:50:29 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
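
Only the InfluxDB publisher is skipped here; the run's metrics still go to the BigQuery table named by --metricsBigQueryDataset/--metricsBigQueryTable in the command line above. By analogy with other Beam performance jobs, publishing to InfluxDB as well would mean adding options along these lines to -DbeamTestPipelineOptions (the option names and host are assumptions, not taken from this job's configuration):

    "--influxDatabase=beam_performance",
    "--influxMeasurement=sql_bqio_read_java_batch",
    "--influxHost=http://localhost:8086"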

Gradle Test Executor 9 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.005 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.003 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 5,5,main]) completed. Took 4 mins 35.969 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 8s
152 actionable tasks: 99 executed, 53 from cache

Publishing build scan...
https://gradle.com/s/ffn5mowaadkhq

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2450

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2450/display/redirect?page=changes>

Changes:

[Robert Bradshaw] Additional CoGBK tests.

[noreply] [BEAM-12803] Update deprecated use of _field_types (#15539)

[Robert Bradshaw] Move CoGBK tests into appropriate module.


------------------------------------------
[...truncated 338.38 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4dccc9788914c2d27a695eb8c5d03105
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 21, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
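
A benign deprecation notice: the job still passes the legacy, intentionally blank --workerHarnessContainerImage= (visible in the command line above), and newer SDKs prefer the renamed option:

    --workerHarnessContainerImage=   (legacy, deprecated)
    --sdkContainerImage=             (preferred replacement)
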
    Sep 21, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 21, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 21, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 21, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 21, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 21, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6331853008731212232.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-YC70iJPj_Rdvnh6iM_nUeFmoKYqMzWSWPerLpAZMR9M.jar
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 21, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 43788cc3b0ed4102ad1b13e5829121cdc205e16015bd5c53b166657a9f7eef9d> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Q3iMw7DtQQKtGxPlgpEhzcIF4WAVvVxTsWZlep9-750.pb
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 21, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_17_45_04-15111700924769977442?project=apache-beam-testing
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_17_45_04-15111700924769977442
    Sep 21, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_17_45_04-15111700924769977442
    Sep 21, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-21T00:45:07.930Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:13.616Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.252Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.301Z: Expanding GroupByKey operations into optimizable parts.
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.327Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.398Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.437Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.469Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.790Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:45:16 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:14.865Z: Starting 5 workers in us-central1-a...
    Sep 21, 2021 12:45:45 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:45.174Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 21, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:56.797Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:45:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:45:56.814Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 21, 2021 12:46:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:07.027Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 21, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:32.670Z: Workers have started successfully.
    Sep 21, 2021 12:46:34 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:46:32.698Z: Workers have started successfully.
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.532Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.667Z: Cleaning up.
    Sep 21, 2021 12:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:47:01.732Z: Stopping worker pool...
    Sep 21, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:49:26.143Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 21, 2021 12:49:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-21T00:49:26.185Z: Worker pool stopped.
    Sep 21, 2021 12:49:33 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_17_45_04-15111700924769977442 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): d992d211-7649-465a-8390-80dd62e59b54 and timestamp: 2021-09-21T00:49:33.364000000Z:
                     Metric:                    Value:
                   read_time                     8.605
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 21, 2021 12:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 7,5,main]) completed. Took 4 mins 46.265 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 16s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/qhqeuealhlp4y

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2449

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2449/display/redirect?page=changes>

Changes:

[danthev] Fix flaky test.

[danthev] Fix lint errors.


------------------------------------------
[...truncated 341.10 KB...]
Gradle Test Executor 2 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 4dccc9788914c2d27a695eb8c5d03105
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 2'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 2'
Successfully started process 'Gradle Test Executor 2'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 6:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 6:45:12 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:15 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:16 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:17 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 6:45:18 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 6:45:19 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test6833087523479725486.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-T_fyMn4O_viNZtHxu6t49Ixm9n1a9rctQNT_ylYHCRQ.jar
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 6:45:22 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 1e2f0f48131d49f93a6232d0f4591b2f9516fe4f6ade6fd0c0374df1eff8dd41> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Hi8PSBMdSfk6YjLQ9FkbL5UW_k9q3m_QwDdN8e_43UE.pb
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 6:45:24 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 6:45:25 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_11_45_25-14236663929264241551?project=apache-beam-testing
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_11_45_25-14236663929264241551
    Sep 20, 2021 6:45:27 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_11_45_25-14236663929264241551
    Sep 20, 2021 6:45:32 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T18:45:29.646Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 6:45:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:36.337Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.252Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.291Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.323Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.397Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.463Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.500Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:37.923Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:45:38 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:45:38.017Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 6:46:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:08.754Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:11.080Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:46:13 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:11.112Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 6:46:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:21.360Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:46.062Z: Workers have started successfully.
    Sep 20, 2021 6:46:48 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:46:46.098Z: Workers have started successfully.
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.485Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.636Z: Cleaning up.
    Sep 20, 2021 6:47:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:47:17.735Z: Stopping worker pool...
    Sep 20, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:49:41.896Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 6:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T18:49:41.944Z: Worker pool stopped.
    Sep 20, 2021 6:49:48 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_11_45_25-14236663929264241551 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): c2953af7-9466-40b3-a259-ecb9c63bd96e and timestamp: 2021-09-20T18:49:48.800000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     8.797

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:49:49 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
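
The warning above means the run measured read_time and fields_read but skipped publishing them, because the InfluxDB connection settings were never supplied. A minimal sketch of wiring them up through Beam's test utilities follows; the builder and its method names are assumed from org.apache.beam.sdk.testutils.publishing and should be checked against the testutils source, the database and measurement values simply mirror the -DbeamTestPipelineOptions shown earlier, and the host is a placeholder:

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    class InfluxWiringSketch {
      // Assumed wiring, not this job's actual configuration: the host is a
      // placeholder; database and measurement mirror the options shown above.
      static InfluxDBSettings settings() {
        return InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("beam_performance")
            .withMeasurement("sql_bqio_read_java_batch")
            .get();
      }
    }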

Gradle Test Executor 2 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.022 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':',5,main]) completed. Took 4 mins 41.483 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 31s
152 actionable tasks: 98 executed, 54 from cache

Publishing build scan...
https://gradle.com/s/almlfcm2sl7aw

Stopped 1 worker daemon(s).
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2448

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2448/display/redirect>

Changes:


------------------------------------------
[...truncated 338.15 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 12:44:51 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 12:44:52 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:55 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
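
For context on the SQLPlan/BEAMPlan pair above: Calcite first produces the logical plan, Beam then lowers it to a BeamCalcRel over a BeamIOSourceRel, and the result is applied to the pipeline like any other PTransform. Below is a minimal, self-contained sketch of running the same predicate through Beam SQL, with toy in-memory rows standing in for the HACKER_NEWS BigQuery table; the class name and sample values are illustrative, not the IT's code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class HackerNewsSqlSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // Trimmed stand-in for the 14-column HACKER_NEWS table.
        Schema schema = Schema.builder()
            .addStringField("by")
            .addStringField("type")
            .addStringField("title")
            .addInt64Field("score")
            .build();

        PCollection<Row> hackerNews = p.apply(
            Create.of(
                    Row.withSchema(schema).addValues("alice", "story", "A story", 5L).build(),
                    Row.withSchema(schema).addValues("bob", "comment", "A reply", 1L).build())
                .withRowSchema(schema));

        // SqlTransform exposes its single input as the table PCOLLECTION.
        PCollection<Row> result = hackerNews.apply(
            SqlTransform.query(
                "SELECT `by` AS author, `type`, `title`, `score` "
                    + "FROM PCOLLECTION "
                    + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));

        p.run().waitUntilFinish();
      }
    }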


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
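
The exception above is Beam's generic coder-inference failure: ParDo(RowMonitor) outputs Row elements, and a PCollection<Row> carries no default coder until a schema is attached. A minimal sketch of the remedy the message itself names, PCollection.setRowSchema; the wrapper below is illustrative, not the IT's actual RowMonitor:

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    class RowMonitorFix {
      // Pass-through monitor that keeps the Row schema on its output so a
      // RowCoder can be inferred instead of throwing IllegalStateException.
      static PCollection<Row> monitor(PCollection<Row> rows, Schema schema) {
        return rows
            .apply("RowMonitor", ParDo.of(new DoFn<Row, Row>() {
              @ProcessElement
              public void processElement(@Element Row row, OutputReceiver<Row> out) {
                out.output(row); // a real monitor would also record metrics here
              }
            }))
            // The fix the error message suggests: attach the schema explicitly.
            .setRowSchema(schema);
      }
    }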

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
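
This line shows the push-down under test: rather than fetching all 14 columns and filtering them in a BeamCalcRel on the workers, the projection (usedFields) and the filter are handed to BigQuery itself. A hedged sketch of roughly what this corresponds to at the BigQueryIO level; the table name and wrapper class are illustrative, not the IT's code:

    import com.google.api.services.bigquery.model.TableRow;
    import java.util.Arrays;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    class PushDownSketch {
      // Project only the used fields and apply the predicate as a Storage API
      // row restriction, so rows are filtered inside BigQuery, not on workers.
      static PCollection<TableRow> read(Pipeline pipeline) {
        return pipeline.apply(
            BigQueryIO.readTableRows()
                .from("apache-beam-testing:beam.HACKER_NEWS") // illustrative name
                .withMethod(Method.DIRECT_READ)
                .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                .withRowRestriction(
                    "(type = 'story' OR type = 'job') AND score > 2"));
      }
    }
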
    Sep 20, 2021 12:44:59 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 12:45:01 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-h5Vm59YQQa-RcmHCCG09Iw6skDekxZqFrPsGtnvUHB4.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7769499112067325841.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-qu_X7N3QbjtJXZki0B2PaWP8mpBRnGZrS80fVAajjhY.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/io/mongodb/build/libs/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests.jar> to gs://temp-storage-for-perf-tests/loadtests/staging/beam-sdks-java-io-mongodb-2.34.0-SNAPSHOT-tests-Lg8Zg7us2s6VPtPB7f5WrJKFl5dhDd82wMfxmGkALkE.jar
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 247 files cached, 3 files newly uploaded in 0 seconds
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 12:45:02 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 8a69b64dcbf8da07a1892edcd7d5c82dce72df1e1fad1e7147d2528951f5a94b> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-imm2Tcv42gehiS7c19XILc5y3x4frR5xR9JSiVH1qUs.pb
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 12:45:05 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-20_05_45_05-6250377755695887449?project=apache-beam-testing
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-20_05_45_05-6250377755695887449
    Sep 20, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-20_05_45_05-6250377755695887449
    Sep 20, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T12:45:10.026Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 12:45:15 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:15.234Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.153Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.185Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.219Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.296Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.324Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.347Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.682Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:16.761Z: Starting 5 workers in us-central1-a...
    Sep 20, 2021 12:45:51 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:45:50.100Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 12:46:05 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:05.463Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:31.254Z: Workers have started successfully.
    Sep 20, 2021 12:46:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:31.284Z: Workers have started successfully.
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:58.896Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:59.032Z: Cleaning up.
    Sep 20, 2021 12:47:01 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:46:59.103Z: Stopping worker pool...
    Sep 20, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:49:16.508Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 12:49:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T12:49:16.555Z: Worker pool stopped.
    Sep 20, 2021 12:49:21 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-20_05_45_05-6250377755695887449 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 88308b99-e8a2-4d5c-887d-020e57452cfa and timestamp: 2021-09-20T12:49:22.035000000Z:
                     Metric:                    Value:
                   read_time                     8.095
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:49:22 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 34.464 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 5s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/m5j2k4dd623jc

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2447

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2447/display/redirect>

Changes:


------------------------------------------
[...truncated 340.32 KB...]
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 6:46:18 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 6:46:21 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 6:46:22 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 6:46:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:28 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:30 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:31 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:32 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1657634523]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:33 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@53688365]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 6:46:34 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 6:46:35 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 6:46:35 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 20, 2021 6:46:36 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 6:46:42 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 6:46:43 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test7183188378273839349.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-mgZjTt6naUChfRxKldtKZjTKUPteKwyKmF9nMkyzV5s.jar
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 4 seconds
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 6:46:46 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash cc41988f5af6504fd7e3beb3570ca172c51ab19d5eb56a7850817ae10f0f9f93> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-zEGYj1r2UE_X476zVwyhcsUasZ1etWp4UIF64Q8Pn5M.pb
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 6:46:51 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_23_46_51-13258525420254987853?project=apache-beam-testing
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_23_46_51-13258525420254987853
    Sep 20, 2021 6:46:52 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_23_46_51-13258525420254987853
    Sep 20, 2021 6:46:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T06:46:55.234Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:02.911Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 6:47:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.558Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.589Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.618Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.693Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.728Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:03.761Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:04.160Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:47:05 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:04.239Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.400Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.643Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:47:35 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:33.661Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 6:47:44 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:47:43.939Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 6:48:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:09.297Z: Workers have started successfully.
    Sep 20, 2021 6:48:12 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:09.320Z: Workers have started successfully.
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.512Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.640Z: Cleaning up.
    Sep 20, 2021 6:48:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:48:37.719Z: Stopping worker pool...
    Sep 20, 2021 6:50:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:50:55.307Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 6:50:57 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T06:50:55.353Z: Worker pool stopped.
    Sep 20, 2021 6:51:02 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_23_46_51-13258525420254987853 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): f3ce399f-6652-4a1b-84d4-dbe9723458ab and timestamp: 2021-09-20T06:51:02.597000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     6.959

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 6:51:03 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 4,5,main]) completed. Took 4 mins 55.124 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 28s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/kvqadneeh5zxe

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Mon Sep 13 06:44:48 UTC 2021.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.209 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2446

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2446/display/redirect>

Changes:


------------------------------------------
[...truncated 337.33 KB...]
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 20, 2021 12:44:51 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 20, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 20, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])
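
Aside: the Calcite ordinals in these plans are HACKER_NEWS column positions, so
author=[$4] is `by`, type=[$8], title=[$0] and score=[$5] (the push-down plan
further below confirms the mapping via usedFields=[by, type, title, score]).
A minimal sketch of issuing the same query through the Java SDK's SqlTransform,
assuming a tiny in-memory stand-in for the table (the real one has 14 columns,
per expr#0..13 above; this IT binds HACKER_NEWS through a BigQuery table
provider instead):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionTuple;
    import org.apache.beam.sdk.values.Row;
    import org.apache.beam.sdk.values.TupleTag;

    public class PushDownQuerySketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        // Schema trimmed to the four columns the query references.
        Schema schema =
            Schema.builder()
                .addStringField("by")
                .addStringField("type")
                .addStringField("title")
                .addInt64Field("score")
                .build();
        PCollection<Row> hackerNews =
            p.apply(
                Create.of(Row.withSchema(schema).addValues("a", "story", "t", 3L).build())
                    .withRowSchema(schema));
        // The tag name becomes the SQL table name.
        PCollection<Row> result =
            PCollectionTuple.of(new TupleTag<>("HACKER_NEWS"), hackerNews)
                .apply(
                    SqlTransform.query(
                        "SELECT `by` AS `author`, `type`, `title`, `score` "
                            + "FROM `HACKER_NEWS` "
                            + "WHERE (`type` = 'story' OR `type` = 'job') AND `score` > 2"));
        p.run().waitUntilFinish();
      }
    }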


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
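
Aside: this is Beam's generic coder-inference failure for a PCollection<Row> --
Row has no default Coder, so the producing transform must either call setCoder()
or, as the message suggests, attach a schema. A minimal sketch of the
setRowSchema remedy (the schema and pass-through DoFn are illustrative, not the
test's RowMonitor):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class SetRowSchemaSketch {
      public static void main(String[] args) {
        Schema schema =
            Schema.builder().addStringField("author").addInt64Field("score").build();
        Pipeline p = Pipeline.create();
        PCollection<Row> out =
            p.apply(
                    Create.of(Row.withSchema(schema).addValues("a", 3L).build())
                        .withRowSchema(schema))
                .apply(
                    "PassThrough",
                    ParDo.of(
                        new DoFn<Row, Row>() {
                          @ProcessElement
                          public void process(@Element Row row, OutputReceiver<Row> o) {
                            o.output(row);
                          }
                        }))
                // Without this line, PCollection.getCoder() fails exactly as in
                // the stack trace above.
                .setRowSchema(schema);
        p.run().waitUntilFinish();
      }
    }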

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 20, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 20, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 20, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
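
Aside: this line is why the push-down variant passes -- both the projection
(usedFields) and the filter are handed to the BigQuery Storage API, so only four
columns of matching rows leave BigQuery. At the BigQueryIO level the pushed-down
read corresponds roughly to the following sketch (method, selected fields and
row restriction are taken from the log; the table name is illustrative):

    import java.util.Arrays;
    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.TypedRead.Method;
    import org.apache.beam.sdk.values.PCollection;

    public class DirectReadSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        PCollection<TableRow> rows =
            p.apply(
                BigQueryIO.readTableRows()
                    .from("some-project:some_dataset.HACKER_NEWS") // illustrative
                    .withMethod(Method.DIRECT_READ)
                    .withSelectedFields(Arrays.asList("by", "type", "title", "score"))
                    .withRowRestriction(
                        "(type = 'story' OR type = 'job') AND score > 2"));
        p.run().waitUntilFinish();
      }
    }
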
    Sep 20, 2021 12:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4426132592971236140.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-QoyyI3kA-wH4jxVC-pHdnJGnQBfa-IabpqcP35qDWpw.jar
    Sep 20, 2021 12:45:01 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 20, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 79e55825c33f8a835ffa6548fb737e7e22279d368b1a178a8d89e1cbe4607272> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-eeVYJcM_ioNf-mVI-3N-fiInnTaLGheKjYnhy-RgcnI.pb
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 20, 2021 12:45:04 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_17_45_04-16828390132500595680?project=apache-beam-testing
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_17_45_04-16828390132500595680
    Sep 20, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_17_45_04-16828390132500595680
    Sep 20, 2021 12:45:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-20T00:45:08.100Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 20, 2021 12:45:15 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:15.297Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.133Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.173Z: Expanding GroupByKey operations into optimizable parts.
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.209Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.270Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.295Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.319Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.599Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:16.675Z: Starting 5 workers in us-central1-c...
    Sep 20, 2021 12:45:32 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:32.094Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 20, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:47.154Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:45:48 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:47.176Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 20, 2021 12:45:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:45:57.473Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 20, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:24.753Z: Workers have started successfully.
    Sep 20, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:24.793Z: Workers have started successfully.
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.674Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.839Z: Cleaning up.
    Sep 20, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:46:57.910Z: Stopping worker pool...
    Sep 20, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:49:12.687Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 20, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-20T00:49:12.730Z: Worker pool stopped.
    Sep 20, 2021 12:49:21 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_17_45_04-16828390132500595680 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): db669282-1e3e-44f4-8639-0640c7fa31b8 and timestamp: 2021-09-20T00:49:21.160000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.535

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 20, 2021 12:49:21 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.023 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 34.083 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 4s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/o6ouf5zpkyv3q

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2445

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2445/display/redirect>

Changes:


------------------------------------------
[...truncated 337.53 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 6:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 6:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 6:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 6:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 6:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test1653160675735518796.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-iEaPyr1lnZzA_UmSRUXQmZqPy8kQiRwdjh7sc0PoCU0.jar
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 6:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 8d6e01620c12c377ee6239240cec7d82c62380ebf854e8819204ec5eb64893fa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-jW4BYgwSw3fuYjkkDOx9gsYjgOv4VOiBkgTsXrZIk_o.pb
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 6:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 6:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_11_45_07-12541032630070109918?project=apache-beam-testing
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_11_45_07-12541032630070109918
    Sep 19, 2021 6:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_11_45_07-12541032630070109918
    Sep 19, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T18:45:10.752Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:16.403Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.162Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.203Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.241Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.317Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.352Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.387Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.741Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:45:18 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:17.820Z: Starting 5 workers in us-central1-a...
    Sep 19, 2021 6:45:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:39.325Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 6:45:58 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:45:56.154Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:22.128Z: Workers have started successfully.
    Sep 19, 2021 6:46:23 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:22.156Z: Workers have started successfully.
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:49.822Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:49.927Z: Cleaning up.
    Sep 19, 2021 6:46:50 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:46:50.010Z: Stopping worker pool...
    Sep 19, 2021 6:49:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:49:06.639Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 6:49:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T18:49:06.690Z: Worker pool stopped.
    Sep 19, 2021 6:49:12 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_11_45_07-12541032630070109918 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 18942ca1-42c4-41c2-91fe-a5601972494c and timestamp: 2021-09-19T18:49:12.536000000Z:
                     Metric:                    Value:
                   read_time                     6.595
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:49:13 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.024 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.029 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 23.177 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 4m 53s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/2yqb7d7vumxq4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure



Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2444

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2444/display/redirect>

Changes:


------------------------------------------
[...truncated 337.50 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 12:44:53 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 12:44:54 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 12:44:54 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:57 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:59 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 12:45:00 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 12:45:03 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test2385287251749945531.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-biuKFtfxebcbszDzssfOEHX30Yc7RM-ZnvJd_N3pXX0.jar
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 12:45:04 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash ec3685dd45af0a8234e46f2df0a7b05dd59e901163188197770e944b89a26c89> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-7DaF3UWvCoI05G8t8KewXdWekBFjGIGXdw6US4mibIk.pb
    Sep 19, 2021 12:45:06 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-19_05_45_07-15457158388052488107?project=apache-beam-testing
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-19_05_45_07-15457158388052488107
    Sep 19, 2021 12:45:08 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-19_05_45_07-15457158388052488107
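
The same gcloud surface can be used to inspect the job while it runs; for example (standard gcloud usage, not output captured in this log):

    gcloud dataflow jobs describe --project=apache-beam-testing --region=us-central1 2021-09-19_05_45_07-15457158388052488107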
    Sep 19, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T12:45:10.648Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 12:45:17 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:16.699Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.465Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.496Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.522Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.594Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.622Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.655Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:17.987Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:18.057Z: Starting 5 workers in us-central1-a...
    Sep 19, 2021 12:45:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:45:22.370Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 12:46:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:05.281Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:35.326Z: Workers have started successfully.
    Sep 19, 2021 12:46:36 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:46:35.354Z: Workers have started successfully.
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.720Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.859Z: Cleaning up.
    Sep 19, 2021 12:47:03 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:47:02.920Z: Stopping worker pool...
    Sep 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:49:30.212Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 12:49:31 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T12:49:30.248Z: Worker pool stopped.
    Sep 19, 2021 12:49:35 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-19_05_45_07-15457158388052488107 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 946bdbb9-ad1a-4629-82a7-6aa6743acae4 and timestamp: 2021-09-19T12:49:35.342000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.918

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:49:35 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
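
This warning means the run's metrics were measured (see the values above) but not exported: InfluxDBPublisher skips publishing unless a measurement and database are configured. In Beam's perf-test jobs these are normally passed as extra pipeline options, along the lines of the following (option names and values are assumptions based on other Beam performance jobs, not taken from this log):

    "--influxDatabase=beam_test_metrics","--influxMeasurement=sql_bqio_read_java_batch"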

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 4 mins 46.36 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 17s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4kjmt3h3hs2n4

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2443

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2443/display/redirect>

Changes:


------------------------------------------
[...truncated 341.79 KB...]
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 6:45:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 6:45:54 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 6:45:54 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:45:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:00 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@127179475]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)
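
This IllegalStateException is a pipeline-construction failure: the RowMonitor ParDo emits Beam Row elements, and no coder can be inferred for Row unless a schema is attached. A minimal, self-contained sketch of the remedy the message itself names (illustrative only, not the integration test's actual code; the schema and values are invented to mirror the query's projected fields):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaSketch {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        Schema schema = Schema.builder()
            .addNullableField("author", Schema.FieldType.STRING)
            .addNullableField("type", Schema.FieldType.STRING)
            .addNullableField("title", Schema.FieldType.STRING)
            .addNullableField("score", Schema.FieldType.INT64)
            .build();

        Row row = Row.withSchema(schema)
            .addValues("someone", "story", "A title", 3L)
            .build();

        PCollection<Row> rows =
            p.apply(Create.of(row).withRowSchema(schema))
                .apply("RowMonitor-like pass-through", ParDo.of(
                    new DoFn<Row, Row>() {
                      @ProcessElement
                      public void processElement(@Element Row r, OutputReceiver<Row> out) {
                        out.output(r);
                      }
                    }))
                // Without this line, construction fails with exactly the
                // IllegalStateException above: no coder can be inferred for Row.
                .setRowSchema(schema);

        p.run().waitUntilFinish();
      }
    }

Note that only readUsingDirectReadMethodPushDown ran to completion in this build; the other two tests failed at pipeline construction with this error before any job was submitted.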

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:01 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@868637325]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 6:46:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 6:46:03 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 6:46:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5363249692408335901.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-y-B-2DhdolkcxTsYTbZYx3rzcYpbpnp9bQwyiRUcOKM.jar
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 6:46:11 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 978c84376bd852fc1fde085c4f5a152de5fe925360e243553909af8e8a35a2fa> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-l4yEN2vYUvwf3ghcT1oVLeX-klNg4kNVOQmvjoo1ovo.pb
    Sep 19, 2021 6:46:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 6:46:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_23_46_14-17703222293438978451?project=apache-beam-testing
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_23_46_14-17703222293438978451
    Sep 19, 2021 6:46:15 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_23_46_14-17703222293438978451
    Sep 19, 2021 6:46:19 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T06:46:18.008Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 6:46:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:27.446Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.273Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.321Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.347Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.422Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.455Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.483Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.799Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:46:29 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:28.889Z: Starting 5 workers in us-central1-c...
    Sep 19, 2021 6:46:58 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:46:58.065Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:07.163Z: Autoscaling: Raised the number of workers to 2 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:47:08 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:07.190Z: Resized worker pool to 2, though goal was 5.  This could be a quota issue.
    Sep 19, 2021 6:47:18 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:17.457Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:42.226Z: Workers have started successfully.
    Sep 19, 2021 6:47:42 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:47:42.260Z: Workers have started successfully.
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.097Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.231Z: Cleaning up.
    Sep 19, 2021 6:48:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:48:10.310Z: Stopping worker pool...
    Sep 19, 2021 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:50:36.446Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 6:50:37 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T06:50:36.489Z: Worker pool stopped.
    Sep 19, 2021 6:50:42 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_23_46_14-17703222293438978451 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 70f996ce-5135-4209-aadf-ae60dd06f90f and timestamp: 2021-09-19T06:50:42.162000000Z:
                     Metric:                    Value:
                   read_time                     7.234
                 fields_read                 4375276.0

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 6:50:42 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.032 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 57.09 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 6m 10s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/peudu53fkfzye

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Sun Sep 12 06:44:40 UTC 2021.
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.83 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2442

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2442/display/redirect>

Changes:


------------------------------------------
[...truncated 339.45 KB...]
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 19, 2021 12:44:52 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 19, 2021 12:44:52 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 19, 2021 12:44:53 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 19, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:55 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:56 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:57 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 19, 2021 12:44:58 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 19, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 19, 2021 12:44:59 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 19, 2021 12:44:59 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test659937586620404907.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-SR_QdYYePgNPgrgs2medifMEAsLCBllgkgYMm5kJfGE.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.visible-assertions/visible-assertions/2.1.2/20d31a578030ec8e941888537267d3123c2ad1c1/visible-assertions-2.1.2.jar to gs://temp-storage-for-perf-tests/loadtests/staging/visible-assertions-2.1.2-RQSulosjfNzcto_1sHqmOr5JkvkHp3w9YSCqm5BBQBw.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/org.rnorth.duct-tape/duct-tape/1.0.8/92edc22a9ab2f3e17c9bf700aaee377d50e8b530/duct-tape-1.0.8.jar to gs://temp-storage-for-perf-tests/loadtests/staging/duct-tape-1.0.8-Mc7xLd7JedH4bXz3CMQaF9pSPQXGhf1mQunQsq3bckA.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna/5.5.0/e0845217c4907822403912ad6828d8e0b256208/jna-5.5.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-5.5.0-swj66_5O1AnehBDgpjLRZLISawNfbqz_lo05CMr7TZ4.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/net.java.dev.jna/jna-platform/4.0.0/deb6bf66918989b50209b8c9aaf3b2561af7f011/jna-platform-4.0.0.jar to gs://temp-storage-for-perf-tests/loadtests/staging/jna-platform-4.0.0-B21i7Yfna9yzdQ_-gKpJXuIRK9chJmOVTWVH9IJEkuk.jar
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 245 files cached, 5 files newly uploaded in 0 seconds
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 19, 2021 12:45:02 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103998 bytes, hash 05ff7e70ecae048a27522839ee324f50718aa4f7a7a493521b27da3ef5c1505e> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-Bf9-cOyuBIonUig57jJPUHGKpPenpJNSGyfaPvXBUF4.pb
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 19, 2021 12:45:05 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_17_45_05-4344567043694350364?project=apache-beam-testing
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_17_45_05-4344567043694350364
    Sep 19, 2021 12:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_17_45_05-4344567043694350364
    Sep 19, 2021 12:45:10 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-19T00:45:08.994Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.034Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.877Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.910Z: Expanding GroupByKey operations into optimizable parts.
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.934Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:16.995Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.032Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.072Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.440Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:45:17 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:17.521Z: Starting 5 workers in us-central1-c...
    Sep 19, 2021 12:45:40 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:37.426Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 19, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:50.546Z: Autoscaling: Raised the number of workers to 1 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:45:52 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:45:50.578Z: Resized worker pool to 1, though goal was 5.  This could be a quota issue.
    Sep 19, 2021 12:46:03 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:00.851Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 19, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:26.464Z: Workers have started successfully.
    Sep 19, 2021 12:46:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:26.520Z: Workers have started successfully.
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.172Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.534Z: Cleaning up.
    Sep 19, 2021 12:47:00 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:46:58.630Z: Stopping worker pool...
    Sep 19, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:49:14.155Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 19, 2021 12:49:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-19T00:49:14.228Z: Worker pool stopped.
    Sep 19, 2021 12:49:54 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_17_45_05-4344567043694350364 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 0b513e5e-647a-4717-a010-2f08d542684d and timestamp: 2021-09-19T00:49:54.400000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     9.793

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 19, 2021 12:49:54 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 11,5,main]) completed. Took 5 mins 6.486 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 37s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/bxeybgq7gulgm

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2441

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2441/display/redirect>

Changes:


------------------------------------------
[...truncated 338.26 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 6:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 6:45:01 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 6:45:02 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

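The IllegalStateException above is Beam's standard complaint when a PCollection<Row> reaches pipeline validation without a schema attached, so no RowCoder can be inferred. Below is a minimal, self-contained illustration of what the error message asks for, mirroring the failing shape (a ParDo that emits Row, like ParDo(RowMonitor) here); the field names and types echo the four projected columns and are assumptions for illustration, not the IT's actual code:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class RowSchemaExample {
      private static final Schema SCHEMA =
          Schema.builder()
              .addNullableField("author", Schema.FieldType.STRING)
              .addNullableField("type", Schema.FieldType.STRING)
              .addNullableField("title", Schema.FieldType.STRING)
              .addNullableField("score", Schema.FieldType.INT64)
              .build();

      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        PCollection<Row> rows =
            p.apply(Create.of("story", "job"))
                .apply(
                    ParDo.of(
                        new DoFn<String, Row>() {
                          @ProcessElement
                          public void processElement(
                              @Element String type, OutputReceiver<Row> out) {
                            out.output(
                                Row.withSchema(SCHEMA)
                                    .addValues("someone", type, "a title", 3L)
                                    .build());
                          }
                        }))
                // A DoFn emitting Row cannot have its coder inferred; without this
                // call, validation fails with exactly the "Unable to return a
                // default Coder ... setRowSchema" error shown above.
                .setRowSchema(SCHEMA);

        p.run().waitUntilFinish();
      }
    }
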
org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 6:45:08 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
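
For context on the plan above: BeamPushDownIOSourceRel means the field projection (usedFields) and the filter were handed to the BigQuery Storage read itself, replacing the BeamCalcRel + BeamIOSourceRel pair seen in the two failing tests, so only matching rows and the four used columns leave BigQuery. A hypothetical standalone version of the same query through SqlTransform follows; it assumes a HACKER_NEWS table has already been registered with a BigQuery table provider using the DIRECT_READ method, as the test's setup does (that registration is not shown):

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.extensions.sql.SqlTransform;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    public class PushDownQueryExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        // With a DIRECT_READ BigQuery table, the BeamSQL planner can translate
        // this WHERE clause and column list into the pushed-down read logged above.
        PCollection<Row> result =
            p.apply(
                SqlTransform.query(
                    "SELECT `by` AS author, type, title, score "
                        + "FROM HACKER_NEWS "
                        + "WHERE (type = 'story' OR type = 'job') AND score > 2"));

        p.run().waitUntilFinish();
      }
    }
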
    Sep 18, 2021 6:45:09 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 6:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test5979530977339199524.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-UYkmvEai9tqk1NB18q5tE65KTYKyeQG_fPS1avJAKWo.jar
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 6:45:12 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103996 bytes, hash ff9277464ca049e2305ccd0594126e954b43700e85cd983c4e244cbb4fe3ebc2> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-_5J3RkygSeIwXM0FlBJulUtDcA6FzZg8TiRMu0_j68I.pb
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 6:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_11_45_15-16673422427586790898?project=apache-beam-testing
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_11_45_15-16673422427586790898
    Sep 18, 2021 6:45:16 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_11_45_15-16673422427586790898
    Sep 18, 2021 6:45:21 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T18:45:19.387Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
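
That warning is expected for this job: autoscaling is deliberately disabled and a fixed pool of 5 workers is requested, so maxNumWorkers has nothing to act on. The relevant flags from the command line above, expressed as pipeline options in code (a sketch for clarity; the test itself passes them via -DbeamTestPipelineOptions):

    import org.apache.beam.runners.dataflow.DataflowRunner;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
    import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class FixedWorkerPoolOptions {
      public static void main(String[] args) {
        DataflowPipelineOptions options =
            PipelineOptionsFactory.fromArgs(args).as(DataflowPipelineOptions.class);
        options.setRunner(DataflowRunner.class);
        options.setRegion("us-central1");
        options.setNumWorkers(5);     // the fixed pool size actually used
        options.setMaxNumWorkers(5);  // ignored once autoscaling is NONE, hence the warning
        options.setAutoscalingAlgorithm(AutoscalingAlgorithmType.NONE);
      }
    }
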
    Sep 18, 2021 6:45:26 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:25.862Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.788Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.822Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.845Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.909Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.940Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:26.974Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:27.319Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:45:28 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:27.388Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 6:45:45 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:45:44.268Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 6:46:11 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:11.422Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:37.715Z: Workers have started successfully.
    Sep 18, 2021 6:46:39 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:46:37.767Z: Workers have started successfully.
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.698Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.845Z: Cleaning up.
    Sep 18, 2021 6:47:09 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:47:07.934Z: Stopping worker pool...
    Sep 18, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:49:24.022Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 6:49:24 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T18:49:24.058Z: Worker pool stopped.
    Sep 18, 2021 6:49:30 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_11_45_15-16673422427586790898 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 2df1b947-7957-404a-a6bb-ecfb2f4fe2a4 and timestamp: 2021-09-18T18:49:30.288000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.433

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:49:31 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.028 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.038 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 6,5,main]) completed. Took 4 mins 35.478 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 14s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/4sgrafoyp3qho

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2440

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2440/display/redirect>

Changes:


------------------------------------------
[...truncated 338.63 KB...]

> Task :sdks:java:extensions:sql:perf-tests:testClasses UP-TO-DATE
Skipping task ':sdks:java:extensions:sql:perf-tests:testClasses' as it has no actions.
:sdks:java:extensions:sql:perf-tests:testClasses (Thread[Daemon worker,5,main]) completed. Took 0.0 secs.
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) started.
Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest
Custom actions are attached to task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 12:45:00 PM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 12:45:00 PM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 12:45:01 PM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:04 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:05 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@784892670]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:06 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1151017424]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 12:45:07 PM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
    Sep 18, 2021 12:45:07 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 12:45:10 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test508090196888883738.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-j8lI1z78mx1qReg8w20b9ImbvZUa94tSU87P0rkRTXs.jar
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 249 files cached, 1 files newly uploaded in 0 seconds
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 12:45:11 PM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <104004 bytes, hash c0595ef0cd231df1819f851e89b0e4f83ebe911b6543e7b429798027b86e9438> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-wFle8M0jHfGBn4UeibDk-D6-kRtlQ-e0KXmAJ7hulDg.pb
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 12:45:14 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-18_05_45_14-6325844803791487056?project=apache-beam-testing
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-18_05_45_14-6325844803791487056
    Sep 18, 2021 12:45:15 PM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-18_05_45_14-6325844803791487056
    Sep 18, 2021 12:45:19 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T12:45:17.859Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 12:45:25 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:24.925Z: Worker configuration: e2-standard-2 in us-central1-c.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.865Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.918Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:25.953Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.024Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.060Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.094Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.446Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:45:27 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:26.508Z: Starting 5 workers in us-central1-c...
    Sep 18, 2021 12:45:41 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:45:39.478Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 12:46:14 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:13.666Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:39.620Z: Workers have started successfully.
    Sep 18, 2021 12:46:40 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:46:39.644Z: Workers have started successfully.
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.254Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.384Z: Cleaning up.
    Sep 18, 2021 12:47:08 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:47:07.463Z: Stopping worker pool...
    Sep 18, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:49:41.585Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 12:49:42 PM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T12:49:41.626Z: Worker pool stopped.
    Sep 18, 2021 12:49:47 PM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-18_05_45_14-6325844803791487056 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 058dd872-bbd4-4445-b23f-91500d0acd92 and timestamp: 2021-09-18T12:49:47.520000000Z:
                     Metric:                    Value:
                 fields_read                 4375276.0
                   read_time                     7.455

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 12:49:47 PM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.025 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.035 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Daemon worker,5,main]) completed. Took 4 mins 52.004 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 27s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/wrv4jwmeuvrfw

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org


Build failed in Jenkins: beam_SQLBigQueryIO_Batch_Performance_Test_Java #2439

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/2439/display/redirect>

Changes:


------------------------------------------
[...truncated 339.46 KB...]
Build cache key for task ':sdks:java:extensions:sql:perf-tests:integrationTest' is 2e2b2489cf9d882242349157c9769a31
Task ':sdks:java:extensions:sql:perf-tests:integrationTest' is not up-to-date because:
  Task.upToDateWhen is false.
Starting process 'Gradle Test Executor 1'. Working directory: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests> Command: /usr/lib/jvm/java-8-openjdk-amd64/bin/java -DbeamTestPipelineOptions=["--project=apache-beam-testing","--tempLocation=gs://temp-storage-for-perf-tests/loadtests","--tempRoot=gs://temp-storage-for-perf-tests/loadtests","--metricsBigQueryDataset=beam_performance","--metricsBigQueryTable=sql_bqio_read_java_batch","--runner=DataflowRunner","--maxNumWorkers=5","--numWorkers=5","--autoscalingAlgorithm=NONE","--workerHarnessContainerImage=","--dataflowWorkerJar=<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar","--region=us-central1"]> -Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager -Dorg.gradle.native=false -Xmx2g -Dfile.encoding=UTF-8 -Duser.country=US -Duser.language=en -Duser.variant -ea -cp /home/jenkins/.gradle/caches/6.9.1/workerMain/gradle-worker.jar worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test Executor 1'
Successfully started process 'Gradle Test Executor 1'

Gradle Test Executor 1 started executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT STANDARD_ERROR
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:<https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/runners/google-cloud-dataflow-java/worker/legacy-worker/build/libs/beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]>
    SLF4J: Found binding in [jar:file:/home/jenkins/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-jdk14/1.7.30/d35953dd2fe54ebe39fdf18cfd82fe6eb35b25ed/slf4j-jdk14-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.JDK14LoggerFactory]

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod STANDARD_ERROR
    Sep 18, 2021 6:44:58 AM org.apache.beam.runners.dataflow.DataflowRunner validateSdkContainerImageOptions
    WARNING: Prefer --sdkContainerImage over deprecated legacy option --workerHarnessContainerImage.
    Sep 18, 2021 6:44:59 AM org.apache.beam.runners.dataflow.options.DataflowPipelineOptions$StagingLocationFactory create
    INFO: No stagingLocation provided, falling back to gcpTempLocation
    Sep 18, 2021 6:45:00 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
    INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 249 files. Enable logging at DEBUG level to see which files will be staged.
    Sep 18, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:02 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:04 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_4/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@945711342]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDirectReadMethod(BigQueryIOPushDownIT.java:162)

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod STANDARD_ERROR
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DEFAULT
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamCalcRel(expr#0..13=[{inputs}], expr#14=['story':VARCHAR], expr#15=[=($t8, $t14)], expr#16=['job':VARCHAR], expr#17=[=($t8, $t16)], expr#18=[OR($t15, $t17)], expr#19=[2], expr#20=[>($t5, $t19)], expr#21=[AND($t18, $t20)], author=[$t4], type=[$t8], title=[$t0], score=[$t5], $condition=[$t21])
      BeamIOSourceRel(table=[[beam, HACKER_NEWS]])


org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDefaultMethod FAILED
    java.lang.IllegalStateException: Unable to return a default Coder for BeamIOSourceRel_118/ParDo(RowMonitor)/ParMultiDo(RowMonitor).output [PCollection@1394176916]. Correct one of the following root causes:
      No Coder has been manually specified;  you may do so using .setCoder().
      Inferring a Coder from the CoderRegistry failed: Cannot provide a coder for a Beam Row. Please provide a schema instead using PCollection.setRowSchema.
      Using the default output Coder from the producing PTransform failed: PTransform.getOutputCoder called.
        at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.base.Preconditions.checkState(Preconditions.java:507)
        at org.apache.beam.sdk.values.PCollection.getCoder(PCollection.java:286)
        at org.apache.beam.sdk.values.PCollection.finishSpecifying(PCollection.java:117)
        at org.apache.beam.sdk.runners.TransformHierarchy.finishSpecifyingInput(TransformHierarchy.java:154)
        at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:547)
        at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:81)
        at org.apache.beam.sdk.extensions.sql.impl.rel.BeamSqlRelUtils.toPCollection(BeamSqlRelUtils.java:42)
        at org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT.readUsingDefaultMethod(BigQueryIOPushDownIT.java:176)
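
Both failed tests hit the same root cause named in the message above: the Row-typed output of ParDo(RowMonitor) reaches the runner without a schema, so no coder can be inferred. A minimal sketch of the fix the error message asks for, assuming a hypothetical RowMonitorDoFn and input PCollection (the IT's actual classes and column types may differ):

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.Row;

    // Schema matching the four projected columns in the query above.
    Schema schema =
        Schema.builder()
            .addStringField("author")
            .addStringField("type")
            .addStringField("title")
            .addInt32Field("score")
            .build();

    // 'rows' is a placeholder upstream PCollection<Row>.
    PCollection<Row> monitored =
        rows
            .apply("ParDo(RowMonitor)", ParDo.of(new RowMonitorDoFn()))
            // Without this call, coder inference fails exactly as in the
            // stack trace: "Please provide a schema instead using
            // PCollection.setRowSchema."
            .setRowSchema(schema);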

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQL:
    SELECT `HACKER_NEWS`.`by` AS `author`, `HACKER_NEWS`.`type`, `HACKER_NEWS`.`title`, `HACKER_NEWS`.`score`
    FROM `beam`.`HACKER_NEWS` AS `HACKER_NEWS`
    WHERE (`HACKER_NEWS`.`type` = 'story' OR `HACKER_NEWS`.`type` = 'job') AND `HACKER_NEWS`.`score` > 2
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery method is set to: DIRECT_READ
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable <init>
    INFO: BigQuery writeDisposition is set to: WRITE_EMPTY
    Sep 18, 2021 6:45:05 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: SQLPlan>
    LogicalProject(author=[$4], type=[$8], title=[$0], score=[$5])
      LogicalFilter(condition=[AND(OR(=($8, 'story'), =($8, 'job')), >($5, 2))])
        BeamIOSourceRel(table=[[beam, HACKER_NEWS]])

    Sep 18, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
    INFO: BEAMPlan>
    BeamPushDownIOSourceRel(table=[[beam, HACKER_NEWS]], usedFields=[[by, type, title, score]], BigQueryFilter=[[supported{OR(=($8, 'story'), =($8, 'job'))>($5, 2)}, unsupported{}]])

    Sep 18, 2021 6:45:06 AM org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryTable buildIOReader
    INFO: Pushing down the following filter: (type = 'story' OR type = 'job') AND score > 2
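
Note the contrast with the readUsingDefaultMethod plan above: here the planner collapses the BeamCalcRel/BeamIOSourceRel pair into a single BeamPushDownIOSourceRel, so only the four used fields are read and the filter is evaluated by the BigQuery Storage API rather than inside the pipeline. A hedged sketch of the kind of table registration that permits this (location and column types are placeholders, not the IT's actual setup):

    // Beam SQL DDL, held as a Java string; the "method" table property is
    // what selects DIRECT_READ and thereby enables filter/project push-down.
    String ddl =
        "CREATE EXTERNAL TABLE HACKER_NEWS (`by` VARCHAR, `type` VARCHAR, "
            + "`title` VARCHAR, `score` BIGINT) "
            + "TYPE 'bigquery' "
            + "LOCATION 'example-project:example_dataset.hacker_news' "
            + "TBLPROPERTIES '{\"method\": \"DIRECT_READ\"}'";
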
    Sep 18, 2021 6:45:06 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Executing pipeline on the Dataflow Service, which will have billing implications related to Google Compute Engine usage and other Google Cloud Services.
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Uploading 250 files from PipelineOptions.filesToStage to staging location to prepare for execution.
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil$PackageAttributes forFileToStage
    INFO: Staging custom dataflow-worker.jar as beam-runners-google-cloud-dataflow-java-legacy-worker-2.34.0-SNAPSHOT-WbmBv0W8lCroww4HLV2iheSTEdg9hX2CL8kP5tsmd-0.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /tmp/test4040703456648660585.zip to gs://temp-storage-for-perf-tests/loadtests/staging/test-2ZQ1RyJ5WY-qkJE4NhYl5MZjBgQtRsIm40FC8zwqkmM.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading /home/jenkins/.gradle/caches/modules-2/files-2.1/com.alibaba/fastjson/1.2.69/6cb063f1d527ff65bdbb9ea74888a5ffc3f92197/fastjson-1.2.69.jar to gs://temp-storage-for-perf-tests/loadtests/staging/fastjson-1.2.69-KniRdEoDeOAJmpuB3b7U0uGn6GMvb4b4_8-jg-UeScg.jar
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil stageClasspathElements
    INFO: Staging files complete: 248 files cached, 2 files newly uploaded in 0 seconds
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Staging portable pipeline proto to gs://temp-storage-for-perf-tests/loadtests/staging/
    Sep 18, 2021 6:45:10 AM org.apache.beam.runners.dataflow.util.PackageUtil tryStagePackage
    INFO: Uploading <103997 bytes, hash 148d6269386e03b8acc83a543743e9acd29909f92967069cc060211c436c7640> to gs://temp-storage-for-perf-tests/loadtests/staging/pipeline-FI1iaThuA7isyDpUN0PprNKZCfkpZwacwGAhHENsdkA.pb
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource) as step s1
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding BeamPushDownIOSourceRel_272/ParDo(RowMonitor) as step s2
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
    INFO: Adding ParDo(TimeMonitor) as step s3
    Sep 18, 2021 6:45:13 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Dataflow SDK version: 2.34.0-SNAPSHOT
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-09-17_23_45_13-8888735768678419727?project=apache-beam-testing
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: Submitted job: 2021-09-17_23_45_13-8888735768678419727
    Sep 18, 2021 6:45:14 AM org.apache.beam.runners.dataflow.DataflowRunner run
    INFO: To cancel the job using the 'gcloud' tool, run:
    > gcloud dataflow jobs --project=apache-beam-testing cancel --region=us-central1 2021-09-17_23_45_13-8888735768678419727
    Sep 18, 2021 6:45:21 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    WARNING: 2021-09-18T06:45:19.954Z: The requested max number of workers (5) is ignored as autoscaling is explicitly disabled (autoscalingAlgorithm=NONE).
    Sep 18, 2021 6:45:26 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:25.576Z: Worker configuration: e2-standard-2 in us-central1-a.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.327Z: Expanding CoGroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.358Z: Expanding GroupByKey operations into optimizable parts.
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.385Z: Lifting ValueCombiningMappingFns into MergeBucketsMappingFns
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.434Z: Fusing adjacent ParDo, Read, Write, and Flatten operations
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.460Z: Fusing consumer BeamPushDownIOSourceRel_272/ParDo(RowMonitor) into BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.485Z: Fusing consumer ParDo(TimeMonitor) into BeamPushDownIOSourceRel_272/ParDo(RowMonitor)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.803Z: Executing operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:45:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:26.876Z: Starting 5 workers in us-central1-a...
    Sep 18, 2021 6:45:54 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:45:53.746Z: Your project already contains 100 Dataflow-created metric descriptors, so new user metrics of the form custom.googleapis.com/* will not be created. However, all user metrics are also available in the metric dataflow.googleapis.com/job/user_counter. If you rely on the custom metrics, you can delete old / unused metric descriptors. See https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.list and https://developers.google.com/apis-explorer/#p/monitoring/v3/monitoring.projects.metricDescriptors.delete
    Sep 18, 2021 6:46:14 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:13.043Z: Autoscaling: Raised the number of workers to 5 based on the rate of progress in the currently running stage(s).
    Sep 18, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:38.824Z: Workers have started successfully.
    Sep 18, 2021 6:46:39 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:46:38.850Z: Workers have started successfully.
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.775Z: Finished operation BeamPushDownIOSourceRel_272/Read Input BQ Rows with push-down/Read(BigQueryStorageTableSource)+BeamPushDownIOSourceRel_272/ParDo(RowMonitor)+ParDo(TimeMonitor)
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.897Z: Cleaning up.
    Sep 18, 2021 6:47:07 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:47:06.976Z: Stopping worker pool...
    Sep 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:49:26.810Z: Autoscaling: Resized worker pool from 5 to 0.
    Sep 18, 2021 6:49:27 AM org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
    INFO: 2021-09-18T06:49:26.856Z: Worker pool stopped.
    Sep 18, 2021 6:49:32 AM org.apache.beam.runners.dataflow.DataflowPipelineJob logTerminalState
    INFO: Job 2021-09-17_23_45_13-8888735768678419727 finished with status DONE.

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_OUT
    Load test results for test (ID): 9ff13ac3-1a6e-4c0c-a574-0adff4797089 and timestamp: 2021-09-18T06:49:32.795000000Z:
                     Metric:                    Value:
                   read_time                     8.761
                 fields_read                 4375276.0
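
Assuming fields_read counts one value per projected column per row, the four selected columns put this run at roughly 4,375,276 / 4 ≈ 1.09 million rows read in 8.761 seconds of measured read time.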

org.apache.beam.sdk.extensions.sql.meta.provider.bigquery.BigQueryIOPushDownIT > readUsingDirectReadMethodPushDown STANDARD_ERROR
    Sep 18, 2021 6:49:33 AM org.apache.beam.sdk.testutils.publishing.InfluxDBPublisher publishWithCheck
    WARNING: Missing property -- measurement/database. Metrics won't be published.
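
This warning is benign for the test outcome itself; it only means the InfluxDB metrics publisher was not configured for this run. A hedged sketch of the settings it checks for, with placeholder values (in this job they would normally arrive through pipeline options):

    import org.apache.beam.sdk.testutils.publishing.InfluxDBSettings;

    // Placeholder host/database/measurement; when database and measurement
    // are absent, InfluxDBPublisher logs the warning above and skips
    // publishing.
    InfluxDBSettings settings =
        InfluxDBSettings.builder()
            .withHost("http://localhost:8086")
            .withDatabase("example_database")
            .withMeasurement("example_measurement")
            .get();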

Gradle Test Executor 1 finished executing tests.

> Task :sdks:java:extensions:sql:perf-tests:integrationTest FAILED

3 tests completed, 2 failed
Finished generating test XML results (0.026 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/test-results/integrationTest>
Generating HTML test report...
Finished generating test html results (0.031 secs) into: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest>
:sdks:java:extensions:sql:perf-tests:integrationTest (Thread[Execution worker for ':' Thread 3,5,main]) completed. Took 4 mins 39.42 secs.

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':sdks:java:extensions:sql:perf-tests:integrationTest'.
> There were failing tests. See the report at: <https://ci-beam.apache.org/job/beam_SQLBigQueryIO_Batch_Performance_Test_Java/ws/src/sdks/java/extensions/sql/perf-tests/build/reports/tests/integrationTest/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.9.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 5m 11s
152 actionable tasks: 96 executed, 56 from cache

Publishing build scan...
https://gradle.com/s/ioqdzwgbbysx6

Build cache (/home/jenkins/.gradle/caches/build-cache-1) removing files not accessed on or after Sat Sep 11 06:44:26 UTC 2021.
Invalidating in-memory cache of /home/jenkins/.gradle/caches/journal-1/file-access.bin
Build cache (/home/jenkins/.gradle/caches/build-cache-1) cleaned up in 0.341 secs.
Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscribe@beam.apache.org
For additional commands, e-mail: builds-help@beam.apache.org